874 results for Objective function values
Abstract:
This work presents the application of an optimization method to estimate parameters that are commonly present in mathematical models of the dynamics of chemical species at the water-sediment interface. The Direct Problem consisted of simulating the concentrations of the organic and inorganic nitrogen species (ammonia and nitrate) in an idealized environment divided into four layers: one water layer (1 metre) and three sediment layers (0-1 cm, 1-2 cm, and 2-10 cm). The Direct Problem was solved by the Runge-Kutta method, producing a 50-day simulation. The Simulated Annealing (SA) method was applied to estimate the diffusion coefficients and the porosity. The efficiency of the adopted strategy was evaluated by comparing synthetic experimental data against the concentrations computed by the solution of the Direct Problem using the parameters estimated by SA. The best fit between experimental data and computed values was obtained when the estimated parameter was the porosity; with respect to the minimization of the objective function, the estimation of this parameter also required the least computational effort. After random noise was added to the concentrations of the nitrogen species, the SA technique was unable to obtain a satisfactory estimate of the diffusion coefficient, except for the 0-1 cm sediment layer. For the other layers, errors on the order of 10% were found (for ammonia in the water column, for example). The results show that the adopted methodology can be a very promising management tool for water bodies, especially those subject to a low-energy regime, such as lakes and coastal lagoons.
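The forward-model-plus-SA loop described above can be sketched in miniature (all model details here are invented stand-ins: a single decay parameter k of dC/dt = -kC replaces the nitrogen-species model, and the synthetic data are noise-free):

```python
import math
import random

# Hypothetical stand-in model: estimate the decay parameter k of dC/dt = -k*C
# by simulated annealing, mirroring the RK4 forward model + SA inverse loop.

def rk4_concentration(k, c0=1.0, t_end=5.0, steps=50):
    """Integrate dC/dt = -k*C with classical RK4; return the final concentration."""
    h = t_end / steps
    c = c0
    f = lambda conc: -k * conc
    for _ in range(steps):
        k1 = f(c)
        k2 = f(c + 0.5 * h * k1)
        k3 = f(c + 0.5 * h * k2)
        k4 = f(c + h * k3)
        c += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return c

def objective(k, data):
    """Squared misfit between model output and (synthetic) observed data."""
    return (rk4_concentration(k) - data) ** 2

def simulated_annealing(data, k0=1.0, t0=1.0, cooling=0.95, iters=500, seed=0):
    rng = random.Random(seed)
    k, cost, temp = k0, objective(k0, data), t0
    for _ in range(iters):
        cand = abs(k + rng.gauss(0.0, 0.1))          # perturb, keep k >= 0
        cand_cost = objective(cand, data)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if cand_cost < cost or rng.random() < math.exp(-(cand_cost - cost) / temp):
            k, cost = cand, cand_cost
        temp *= cooling
    return k

true_k = 0.3
synthetic_data = rk4_concentration(true_k)   # "synthetic experimental data"
estimate = simulated_annealing(synthetic_data)
```

With noise-free data the estimate lands close to the true parameter; adding random noise to `synthetic_data` is the analogue of the noisy experiment discussed in the abstract.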
Abstract:
Heat exchanger networks are widely used in the chemical industry to promote energy integration within a process, recovering heat from hot streams to heat cold streams. These networks are subject to fouling, which increases the resistance to heat transfer and degrades performance. One of the main ways to reduce the losses caused by this phenomenon is to clean the heat exchangers periodically. The present work develops a new method to find the optimal cleaning schedule of a heat exchanger network. The method uses the concept of a sliding horizon combined with a mixed-integer linear programming (MILP) problem. The MILP problem defines the optimal set of heat exchangers to be cleaned at a given time instant (the first instant of the sliding horizon), taking into account their influence on future instants (the remainder of the horizon). The MILP problem includes constraints for the energy balances, the heat exchanger equations, and the maximum number of simultaneous cleanings, with the objective of minimizing the plant's energy consumption. The optimal cleaning schedule is composed by combining the results obtained at each time instant. The performance of this approach was analyzed through its application to several typical examples from the literature, including a large-scale example from a Brazilian refinery. The results show that the approach achieved gains similar to, and sometimes better than, those reported in the literature, indicating that the method can provide good results at low computational cost.
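A toy version of the sliding-horizon idea can be sketched as follows (the numbers, the linear fouling model, and the per-period penalty are all invented; a real implementation would pose each window as a MILP rather than enumerate it):

```python
from itertools import product

# Toy sliding-horizon cleaning scheduler (all values hypothetical): each
# exchanger's fouling resistance grows linearly per period; cleaning resets it
# to zero but costs one period of lost duty. At every time step we enumerate
# clean/no-clean decisions over a short horizon, keep the best sequence,
# commit only its first decision, then slide the window forward.

GROWTH = [0.3, 0.5]      # fouling growth rate per exchanger per period
CLEAN_LOSS = 2.0         # energy penalty while an exchanger is being cleaned
HORIZON = 3              # look-ahead window length
PERIODS = 8              # total schedule length

def horizon_cost(state, decisions):
    """Total penalty of a decision sequence starting from fouling `state`."""
    fouling = list(state)
    cost = 0.0
    for step in decisions:                      # step = tuple of 0/1 per exchanger
        for i, clean in enumerate(step):
            if clean:
                cost += CLEAN_LOSS
                fouling[i] = 0.0
            else:
                fouling[i] += GROWTH[i]
                cost += fouling[i]              # penalty grows with fouling
    return cost

def sliding_horizon_schedule():
    fouling = [0.0, 0.0]
    schedule = []
    # At most one simultaneous cleaning, mirroring the constraint above.
    options = [d for d in product((0, 1), repeat=2) if sum(d) <= 1]
    for _ in range(PERIODS):
        best = min(product(options, repeat=HORIZON),
                   key=lambda seq: horizon_cost(fouling, seq))
        first = best[0]                          # commit only the first instant
        for i, clean in enumerate(first):
            fouling[i] = 0.0 if clean else fouling[i] + GROWTH[i]
        schedule.append(first)
    return schedule

schedule = sliding_horizon_schedule()
```

The committed schedule respects the simultaneous-cleaning limit in every period while cleanings are triggered once accumulated fouling makes them pay off within the look-ahead window.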
Abstract:
Many potential applications of swarm robotics require each robot to be able to estimate its own position. Robot localization information is needed, for example, so that each member of the swarm can position itself within a predefined robot formation. Likewise, when the robots act as mobile sensors, position information is needed to identify the location of the measured events. Given the size, cost, and energy constraints of the devices, as well as limitations imposed by the operating environment, the most obvious solution, i.e., using a Global Positioning System (GPS), is often unfeasible. The method proposed in this work allows the absolute positions of a set of unknown nodes to be estimated from the coordinates of a set of reference nodes and the distance measurements taken between the network nodes. The solution is obtained through a distributed processing strategy in which each unknown node estimates its own position and helps its neighbours compute their respective coordinates. The solution includes a new method, called Multi-hop Collaborative Min-Max Localization (MCMM), proposed here with the aim of improving the quality of the initial positions of the unknown nodes in case of failures during the recognition of the reference nodes. Position refinement is based on the backtracking search algorithm (BSA) and particle swarm optimization (PSO), whose performances are compared. To compose the objective function, a new method for computing the confidence factor of the network nodes is introduced, the Min-Max Area Confidence Factor (MMA-CF), which is compared with the pre-existing hop-count-to-references confidence factor (HTA-CF).
Based on the proposed localization method, four algorithms were developed and evaluated through simulations in MATLAB and experiments conducted on swarms of Kilobot robots. The performance of the algorithms is evaluated on problems with different topologies, numbers of nodes, and proportions of reference nodes. Their performance is also compared with that of other localization algorithms, yielding results 40% to 51% better. The simulation and experiment results demonstrate the effectiveness of the proposed method.
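The min-max (bounding-box) estimate that MCMM builds on can be sketched as follows (the anchor layout and noise-free ranges are invented for illustration; the multi-hop and collaborative refinements of the abstract are not reproduced):

```python
import math

# Classic min-max (bounding-box) localization sketch: each anchor at (x, y)
# with measured range r bounds the unknown node inside the square
# [x-r, x+r] x [y-r, y+r]; intersecting the squares and taking the centre of
# the resulting box gives the initial position estimate.

def min_max_estimate(anchors):
    """anchors: list of (x, y, measured_range) tuples."""
    lo_x = max(x - r for x, y, r in anchors)
    hi_x = min(x + r for x, y, r in anchors)
    lo_y = max(y - r for x, y, r in anchors)
    hi_y = min(y + r for x, y, r in anchors)
    return ((lo_x + hi_x) / 2.0, (lo_y + hi_y) / 2.0)

# Unknown node at (2, 3) with ideal (noise-free) ranges to three anchors:
node = (2.0, 3.0)
anchor_positions = [(0.0, 0.0), (5.0, 0.0), (0.0, 6.0)]
ranges = [(ax, ay, math.hypot(node[0] - ax, node[1] - ay))
          for ax, ay in anchor_positions]
est = min_max_estimate(ranges)
```

Even with perfect ranges the box centre is only an approximation, which is why a refinement stage (BSA or PSO in the abstract) follows the initial estimate.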
Abstract:
We propose an algorithm to perform multitask learning where each task has potentially distinct label sets and label correspondences are not readily available. This is in contrast with existing methods which either assume that the label sets shared by different tasks are the same or that there exists a label mapping oracle. Our method directly maximizes the mutual information among the labels, and we show that the resulting objective function can be efficiently optimized using existing algorithms. Our proposed approach has a direct application for data integration with different label spaces, such as integrating Yahoo! and DMOZ web directories.
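The quantity being maximized can be illustrated with a small empirical mutual-information computation (a sketch only, not the paper's optimizer; the label names are invented):

```python
import math
from collections import Counter

# Empirical mutual information (in nats) between two label assignments over
# the same items -- the kind of quantity maximized when aligning distinct
# label sets without a mapping oracle.

def mutual_information(labels_a, labels_b):
    n = len(labels_a)
    pa, pb = Counter(labels_a), Counter(labels_b)
    joint = Counter(zip(labels_a, labels_b))
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        mi += p_ab * math.log(p_ab * n * n / (pa[a] * pb[b]))
    return mi

# Perfectly corresponding label sets carry maximal information about each
# other; independent ones carry none.
a = ["news", "sport", "news", "sport"]
b = ["N", "S", "N", "S"]          # a consistent relabelling of `a`
c = ["N", "N", "S", "S"]          # independent of `a`
mi_aligned = mutual_information(a, b)
mi_indep = mutual_information(a, c)
```

Here `mi_aligned` equals ln 2 (the full entropy of the two-class labels) while `mi_indep` is zero, which is why maximizing mutual information recovers label correspondences.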
Abstract:
Understanding the guiding principles of sensory coding strategies is a main goal in computational neuroscience. Among others, the principles of predictive coding and slowness appear to capture aspects of sensory processing. Predictive coding postulates that sensory systems are adapted to the structure of their input signals such that information about future inputs is encoded. Slow feature analysis (SFA) is a method for extracting slowly varying components from quickly varying input signals, thereby learning temporally invariant features. Here, we use the information bottleneck method to state an information-theoretic objective function for temporally local predictive coding. We then show that the linear case of SFA can be interpreted as a variant of predictive coding that maximizes the mutual information between the current output of the system and the input signal in the next time step. This demonstrates that the slowness principle and predictive coding are intimately related.
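The linear-SFA step referred to above can be sketched with the standard algorithm (whiten the signal, then take the direction whose output varies most slowly); the toy signals below are invented:

```python
import numpy as np

# Minimal linear SFA: whiten the input, then take the eigenvector of the
# covariance of the temporal differences with the smallest eigenvalue --
# the direction of slowest variation.

def linear_sfa(x):
    """x: (T, d) signal; returns the slowest unit-variance output, shape (T,)."""
    x = x - x.mean(axis=0)
    cov = np.cov(x.T)
    evals, evecs = np.linalg.eigh(cov)
    white = x @ evecs @ np.diag(evals ** -0.5)      # whitening transform
    dcov = np.cov(np.diff(white, axis=0).T)
    devals, devecs = np.linalg.eigh(dcov)           # ascending eigenvalues
    return white @ devecs[:, 0]                     # slowest direction

t = np.linspace(0, 4 * np.pi, 500)
slow, fast = np.sin(t), np.sin(15 * t)
mixed = np.stack([slow + 0.5 * fast, 0.5 * slow - fast], axis=1)
recovered = linear_sfa(mixed)
```

The recovered output is (up to sign and scale) the slow source, the temporally most predictable component of the mixture, which is the link to predictive coding drawn in the abstract.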
Abstract:
In this paper we develop a new approach to sparse principal component analysis (sparse PCA). We propose two single-unit and two block optimization formulations of the sparse PCA problem, aimed at extracting a single sparse dominant principal component of a data matrix, or more components at once, respectively. While the initial formulations involve nonconvex functions, and are therefore computationally intractable, we rewrite them into the form of an optimization program involving maximization of a convex function on a compact set. The dimension of the search space is decreased enormously if the data matrix has many more columns (variables) than rows. We then propose and analyze a simple gradient method suited for the task. It appears that our algorithm has the best convergence properties when either the objective function or the feasible set is strongly convex, which is the case with our single-unit formulations and can be enforced in the block case. Finally, we demonstrate numerically on a set of random and gene expression test problems that our approach outperforms existing algorithms both in quality of the obtained solution and in computational speed. © 2010 Michel Journée, Yurii Nesterov, Peter Richtárik and Rodolphe Sepulchre.
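For intuition, a single sparse component can also be obtained by a simple truncated power iteration (a common baseline, not the gradient scheme analyzed in the paper; the planted-signal data are invented):

```python
import numpy as np

# Truncated power iteration for one sparse principal component: a power step
# on the covariance matrix followed by hard-thresholding that keeps only the
# k largest-magnitude loadings, then renormalization.

def sparse_pc(data, k, iters=200):
    a = data - data.mean(axis=0)
    cov = a.T @ a / len(a)
    v = np.ones(cov.shape[0]) / np.sqrt(cov.shape[0])
    for _ in range(iters):
        v = cov @ v
        keep = np.argsort(np.abs(v))[-k:]        # indices of k largest loadings
        mask = np.zeros_like(v)
        mask[keep] = 1.0
        v = v * mask
        v /= np.linalg.norm(v)
    return v

rng = np.random.default_rng(0)
# Planted sparse direction: only the first two of ten variables co-vary strongly.
z = rng.normal(size=(500, 1))
data = rng.normal(scale=0.1, size=(500, 10))
data[:, :2] += z
pc = sparse_pc(data, k=2)
```

The returned loading vector is unit-norm with exactly k nonzero entries, and on this planted example it recovers the true support.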
Abstract:
In view of its special features, the brushless doubly fed induction generator (BDFIG) shows high potential for employment as a variable-speed drive or wind generator. However, the machine suffers from low efficiency and power factor as well as high levels of noise and vibration due to spatial harmonics. These harmonics arise mainly from the rotor winding configuration, slotting effects, and saturation. In this paper, analytical equations are derived for the spatial harmonics and their effects on leakage flux, additional loss, noise, and vibration. Using the derived equations and an electromagnetic-thermal model, a simple design procedure is presented, with the design variables selected based on sensitivity analyses. A multiobjective optimization method using an imperialist competitive algorithm as the solver is established to maximize efficiency, power factor, and power-to-weight ratio, as well as to reduce rotor spatial harmonic distortion and voltage regulation simultaneously. Several constraints on dimensions, magnetic flux densities, temperatures, vibration level, and converter voltage and rating are imposed to ensure feasibility of the designed machine. The results show a significant improvement in the objective function. Finally, the analytical results of the optimized structure are validated using the finite-element method and are compared to the experimental results of the D180 frame size prototype BDFIG. © 1982-2012 IEEE.
Abstract:
Genetic algorithms (GAs) were used to design triangular-lattice photonic crystals with a large absolute band gap. Considering fabrication issues, the algorithm represented the unit cell with large pixels and took the largest absolute band gap under the fifth band as the objective function. By integrating a Fourier-transform data storage mechanism, the algorithm ran efficiently and effectively and optimized a triangular-lattice photonic crystal with scatterers in the shape of a 'dielectric-air rod'. It had a large absolute band gap with a relative width (ratio of gap width to midgap) of 23.8%.
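The GA loop itself is generic and can be sketched as below (the band-gap objective is replaced by an invented stand-in fitness on a pixelated unit cell, since computing photonic bands is out of scope here):

```python
import random

# Toy GA over a bitstring of unit-cell "pixels": elitist selection,
# single-point crossover, and bit-flip mutation. The fitness simply rewards
# matching an invented target pattern, standing in for the band-gap objective.

TARGET = [1, 0] * 8          # stand-in optimum pattern, 16 pixels

def fitness(bits):
    return sum(b == t for b, t in zip(bits, TARGET))

def genetic_algorithm(pop_size=30, generations=60, mut_rate=0.05, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # elitism: parents survive
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(TARGET))  # single-point crossover
            child = a[:cut] + b[cut:]
            children.append([1 - g if rng.random() < mut_rate else g
                             for g in child])
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_algorithm()
```

In the real design problem the fitness evaluation is the expensive part, which is why the abstract emphasizes the data-storage mechanism that avoids recomputing Fourier transforms.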
Abstract:
In this paper, we viewed the diel vertical migration (DVM) of copepods in the context of the animal's immediate, everyday behavioral concerns and constructed an instantaneous behavioral criterion effective for both DVM and non-DVM behaviors. This criterion employs the function of 'venturous revenue' (VR), the product of the food intake and the probability of survival, to evaluate the gains and losses of the behaviors that the copepod can trade off. The optimal behavior is to find the habitat that maximizes VR. Two types of VR are formulated and tested by theoretical analysis and simulation. The sensed VR, which monitors the real-time changes of the trade-offs and thereby determines the optimum habitat, is validated as an effective objective function for the optimization of behavior; the realized VR, which quantifies the actual profit obtained by an optimal copepod in the sensed-VR-determined habitat, defines the life history of a specific age cohort. The achievement of a robust copepod overwintering stock through integrating the dynamics of the constituent age cohorts subject to the instantaneous behavioral criterion for DVM clearly exemplifies a possible way of bridging the immediate pursuits of an individual and the ultimate success of the population. (c) 2005 Published by Elsevier Ltd.
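The venturous-revenue criterion can be illustrated with invented functional forms (food declining with depth, visual predation risk requiring light); the VR-maximizing depth then shifts downward by day, reproducing the DVM pattern qualitatively:

```python
import math

# Toy venturous revenue: VR(depth) = food_intake(depth) * survival(depth, light).
# All functional forms and constants are invented for illustration only.

DEPTHS = range(0, 201, 10)            # candidate habitat depths, metres

def food(depth):
    return math.exp(-depth / 50.0)    # food concentrated near the surface

def survival(depth, light):
    risk = light * math.exp(-depth / 30.0)   # visual predation needs light
    return math.exp(-risk)

def optimal_depth(light):
    """Habitat maximizing VR, the instantaneous behavioral criterion."""
    return max(DEPTHS, key=lambda z: food(z) * survival(z, light))

day_depth = optimal_depth(light=3.0)
night_depth = optimal_depth(light=0.1)
```

At night the predation term is negligible and the optimum sits at the food-rich surface; in daylight the optimum drops to mid-depth, the qualitative signature of DVM.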
Abstract:
This paper introduces a large-range six-axis force/torque sensor system based on a Stewart structure. A mathematical model of the sensor's force/torque measurement is established, and an optimization objective function based on the inverse kinematics solution is constructed. A new parameter calibration method, called the branch rotation calibration method, is proposed. Experiments were conducted on the developed six-axis force/torque sensor system, and the results show that the branch rotation method can effectively identify the structural parameters of the sensor and improve the measurement accuracy. The method can be applied to the parameter calibration of six-axis force/torque sensors with similar structures.
Abstract:
For the path planning problem of mobile robots in dynamic, uncertain environments, a linear programming (LP) based method in the acceleration space is proposed. Using relative information in the robot's acceleration space, the nonlinear robot path planning problem is formulated as a linear program that minimizes an objective function subject to a set of linear constraints; a planner embedding the LP method then produces an optimal path satisfying the performance requirements. Simulation experiments verify the practicality and effectiveness of the algorithm, which is both more optimal and more real-time than the artificial potential guided evolution algorithm (APEA).
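The formulation can be illustrated with a tiny 2-D linear program in acceleration space, solved here by brute-force vertex enumeration (the constraints and cost are invented; a real planner would call an LP solver):

```python
from itertools import combinations

# Tiny 2-D LP: pick an acceleration (ax, ay) minimizing a linear cost subject
# to linear safety/actuator constraints of the form a*ax + b*ay <= c.
# An optimum of an LP lies at a vertex of the feasible polygon, so we
# enumerate pairwise constraint-boundary intersections.

def solve_lp_2d(cost, constraints):
    """cost: (cx, cy); constraints: list of (a, b, c) meaning a*x + b*y <= c."""
    best = None
    for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue                       # parallel boundaries: no vertex
        x = (c1 * b2 - c2 * b1) / det      # intersection of the two boundaries
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in constraints):
            value = cost[0] * x + cost[1] * y
            if best is None or value < best[0]:
                best = (value, (x, y))
    return best[1]

# Hypothetical planning step: prefer braking along x (minimize ax) within the
# actuator box |ax| <= 2, |ay| <= 2 and a collision-avoidance half-plane
# ax + ay <= 1.
constraints = [(1, 0, 2), (-1, 0, 2), (0, 1, 2), (0, -1, 2), (1, 1, 1)]
ax, ay = solve_lp_2d((1.0, 0.0), constraints)
```

Each planning cycle would rebuild the constraint set from the current relative state of the obstacles and re-solve, which is what makes the LP formulation attractive for real-time use.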
Abstract:
Six-axis force sensors based on the Stewart platform feature a compact structure, high stiffness, and a wide measuring range, and have broad application prospects in fields such as industrial robotics and space station docking. A good calibration method is the basis for using a sensor correctly. Since a Stewart-platform-based six-axis force sensor is a complex nonlinear system, a conventional linear calibration inevitably introduces large calibration errors that degrade its performance. Calibration is, in essence, the process of determining the mapping function from the measurement space to the space of true values. From function approximation theory, when function values are given only on a known point set, the unknown function can be approximated by simpler functions such as polynomials or piecewise polynomials. Based on this idea, this paper divides the whole measurement space into several contiguous measurement subspaces and performs a linear calibration in each subspace, thereby improving the calibration accuracy of the whole measurement system. Experimental results show that the calibration method is effective.
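The subspace-wise linear calibration idea can be sketched in one dimension (the real sensor maps six-dimensional forces; the quadratic "true response" used here is an invented stand-in):

```python
# Piecewise linear calibration in 1-D: split the measurement range into
# segments and fit an independent least-squares line per segment, which
# tracks a nonlinear response better than one global line.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

class PiecewiseCalibration:
    def __init__(self, xs, ys, segments):
        self.bounds, self.lines = [], []
        step = len(xs) // segments
        for s in range(segments):
            lo = s * step
            hi = len(xs) if s == segments - 1 else (s + 1) * step
            self.bounds.append(xs[hi - 1])
            self.lines.append(fit_line(xs[lo:hi], ys[lo:hi]))

    def __call__(self, x):
        for bound, (m, b) in zip(self.bounds, self.lines):
            if x <= bound:
                return m * x + b
        m, b = self.lines[-1]
        return m * x + b

# Nonlinear "true" response y = x**2 sampled on [0, 4]:
xs = [i * 0.1 for i in range(41)]
ys = [x * x for x in xs]
cal = PiecewiseCalibration(xs, ys, segments=4)
global_line = fit_line(xs, ys)
err_piecewise = max(abs(cal(x) - x * x) for x in xs)
err_global = max(abs(global_line[0] * x + global_line[1] - x * x) for x in xs)
```

Splitting the range into four segments cuts the worst-case calibration error well below that of the single global line, which is the effect the paper exploits per measurement subspace.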
Abstract:
Based on fractal theory, contractive mapping principles, and fixed point theory, and by means of affine transforms, this dissertation develops a novel Explicit Fractal Interpolation Function (EFIF) which can be used to reconstruct seismic data with high fidelity and precision. Spatial trace interpolation is one of the important issues in seismic data processing. Under ideal circumstances, seismic data should be sampled with uniform spatial coverage. However, practical constraints such as complex surface conditions mean that the sampling density may be sparse, or that some traces may be lost for other reasons. Wide spacing between receivers can result in sparse sampling along traverse lines and thus in spatial aliasing of short-wavelength features. Hence, the interpolation method is of great importance: it must preserve not only the amplitude information but also the phase information, especially at points where the phase changes sharply. Several interpolation methods have been put forward, but this dissertation focuses on a special class of fractal interpolation function, referred to as the explicit fractal interpolation function, to improve the accuracy of the interpolation reconstruction and to make the local information evident. The traditional fractal interpolation method is mainly based on the random fractional Brownian motion (FBM) model; furthermore, the vertical scaling factor, which plays a critical role in the implementation of fractal interpolation, is assigned the same value throughout the interpolation process, so the local information cannot be made evident. In addition, the main defect of the traditional fractal interpolation method is that it cannot obtain the function values at the interpolation nodes; it therefore cannot analyze the node error quantitatively or evaluate the feasibility of the method.
Detailed discussions of the applications of fractal interpolation in seismology have not been given by previous authors, let alone the interpolation processing of single-trace seismograms. On the basis of previous work and fractal theory, this dissertation discusses fractal interpolation thoroughly, analyzes the stability of this special kind of interpolating function, and proposes an explicit expression for the vertical scaling factor, which controls the precision of the interpolation. This novel method extends the traditional fractal interpolation method and converts fractal interpolation with random algorithms into interpolation with deterministic algorithms. A binary-tree data structure is applied during the interpolation process, which avoids the iteration that is inevitable in traditional fractal interpolation and improves computational efficiency. To illustrate the validity of the novel method, this dissertation develops several theoretical models, synthesizes common shot gathers and seismograms, and reconstructs the traces that were erased from the initial section using the explicit fractal interpolation method. To compare quantitatively the differences in waveform and amplitude between the theoretical traces erased from the initial section and the resulting traces after reconstruction, each missing trace is reconstructed and the residuals are analyzed. The numerical experiments demonstrate that the novel fractal interpolation method is applicable not only to reconstructing seismograms with small offset but also to seismograms with large offset. The seismograms reconstructed by the explicit fractal interpolation method resemble the original ones well: the waveforms of the missing traces are estimated very well, and the amplitudes of the interpolated traces are a good approximation of the original ones.
The high precision and computational efficiency of the explicit fractal interpolation make it a useful tool for reconstructing seismic data; it can not only make the local information evident but also preserve the overall characteristics of the object investigated. To illustrate the influence of the explicit fractal interpolation method on the accuracy of imaging structures in the Earth's interior, this dissertation applies the method to reverse-time migration. The imaging sections obtained using the fractally interpolated reflection data resemble the original ones very well. The numerical experiments demonstrate that, even with sparse sampling, highly accurate imaging of the structure of the Earth's interior can still be obtained by means of the explicit fractal interpolation method, so imaging results of fine quality can be obtained using a relatively small number of seismic stations. With the fractal interpolation method, the efficiency and accuracy of reverse-time migration can be improved under economic constraints. To verify the applicability of the presented method to real data, we tested it on data provided by the Broadband Seismic Array Laboratory, IGGCAS. The results demonstrate that the accuracy of explicit fractal interpolation remains very high even for real data with large epicentral distances and offsets; the amplitudes and phases of the reconstructed station data resemble the original ones that were erased from the initial section very well. Altogether, the novel fractal interpolation function provides a new and useful tool to reconstruct seismic data with high precision and efficiency, and presents an alternative way to image the deep structure of the Earth accurately.
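The affine-IFS construction underlying fractal interpolation can be sketched as follows (a deterministic sweep over the maps of the standard Barnsley fractal interpolation function; the dissertation's explicit rule for the vertical scaling factor is replaced by a fixed factor d, |d| < 1, as an assumption for illustration):

```python
# Deterministic fractal interpolation sketch: build the affine IFS maps
# w_i(x, y) = (a_i x + e_i, c_i x + d y + f_i) that send the whole interval
# endpoints onto consecutive interpolation nodes, then sweep the maps over
# the node set to generate points on the attractor, which passes through
# every node.

def fif_points(nodes, d=0.3, depth=5):
    """nodes: [(x0, y0), ..., (xN, yN)] with x increasing; returns attractor points."""
    x0, y0 = nodes[0]
    xN, yN = nodes[-1]
    span = xN - x0
    maps = []
    for (xl, yl), (xr, yr) in zip(nodes, nodes[1:]):
        a = (xr - xl) / span
        e = (xN * xl - x0 * xr) / span
        c = (yr - yl - d * (yN - y0)) / span
        f = (xN * yl - x0 * yr - d * (xN * y0 - x0 * yN)) / span
        maps.append((a, e, c, f))
    pts = list(nodes)
    for _ in range(depth):                       # deterministic sweeps, no randomness
        pts = [(a * x + e, c * x + d * y + f)
               for a, e, c, f in maps for x, y in pts]
    return sorted(pts)

nodes = [(0.0, 0.0), (0.5, 1.0), (1.0, 0.0)]
pts = fif_points(nodes)
```

With d = 0 the construction degenerates to piecewise linear interpolation; the vertical scaling factor is what injects the fractal detail, which is why the dissertation's explicit expression for it controls the interpolation precision.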
Abstract:
Seismic exploration is the main method of searching for oil and gas. As seismic exploration has developed, the targets have become more and more complex, which places higher demands on accuracy and efficiency. The Fourier finite-difference (FFD) method is one of the most valuable methods for imaging complex structures and has achieved good results. However, in complex media with wide propagation angles, the accuracy of the FFD method is not satisfactory. Based on the FFD operator, we extend the two coefficients to be optimized to four, and then optimize them globally using a simulated annealing algorithm. Our optimization method takes the solution of the one-way wave equation as the objective function. Besides the velocity contrast, we consider the effects of both frequency and depth interval. The proposed method improves the accurate propagation angle of the FFD method without additional computation time, reaching 75° in complex media with large lateral velocity contrasts and wide propagation angles. In this thesis, combining the FFD method with the alternating-direction-implicit plus interpolation (ADIPI) method, we obtain a 3D FFD scheme with higher accuracy. While keeping the efficiency of the FFD method, this scheme not only removes the azimuthal anisotropy but also optimizes the FFD method, which is helpful for 3D seismic exploration. We also use the multi-parameter global optimization method to optimize the high-order term of the FFD method. Using a lower-order equation to approximate the effect of a higher-order equation not only decreases the computational cost arising from the higher-order term but also markedly improves the accuracy of the FFD method. We compare the FFD, SAFFD (multi-parameter simulated-annealing globally optimized FFD), PFFD, phase-shift (PS), globally optimized FFD (GOFFD), and higher-order-term optimized FFD methods.
The theoretical analyses and the impulse responses demonstrate that the higher-order-term optimized FFD method significantly extends the accurate propagation angle of the FFD method, which is useful for complex media with wide propagation angles.
Abstract:
The theory and approach of broadband teleseismic body-waveform inversion are presented in this paper, and methods for determining crustal structure are developed. Based on teleseismic P-wave data, the theoretical image of the P-wave radial component is calculated via the convolution of the teleseismic P-wave vertical component with the transfer function, and a P-waveform inversion method is thereby built. Application results show that the approach is effective and stable and that its resolution is high. Accurate and reliable teleseismic P waveforms recorded by CDSN and IRIS are used to obtain the lithospheric transfer functions of China and its vicinage; the lithospheric structure of this region is inverted from these reliable transfer functions, new knowledge about the deep structure of China and its vicinage is obtained, and reliable seismological evidence is provided to reveal geodynamic evolution processes and to help establish the continental collision theory. The major studies are as follows. Two important methods of studying crustal and upper-mantle structure, body-wave travel-time inversion and waveform modeling, are reviewed systematically. Based on ray theory, travel-time inversion is characterized by simplicity; a preliminary crustal and upper-mantle velocity model can be obtained by 1-D travel-time inversion, which provides the reference model for studying focal location, focal mechanism, and the fine structure of the crust and upper mantle. The large-scale lateral inhomogeneity of the crust and upper mantle can be obtained by three-dimensional travel-time seismic tomography. Based on elastic dynamics, and through the fitting of theoretical seismograms to observed seismograms, waveform modeling can interpret the detailed waveform and further uncover the one-dimensional fine structure and lateral variation of the crust and upper mantle, especially the media characteristics of zones where ray theory is singular.
Both travel-time inversion and waveform modeling rest on certain approximations, each with its own advantages and disadvantages, and both provide convincing structural information for elucidating the physical and chemical features and geodynamic processes of the crust and upper mantle. The direct wave, surface wave, and refracted wave have low resolution for investigating seismic velocity transition zones and are therefore inadequate for studying seismic discontinuities. In contrast, converted and reflected waves, which sample the discontinuities directly, must be carefully picked from seismograms to constrain the velocity transition zones. Converted and reflected waves can be used to study not only crustal structure but also upper-mantle discontinuities. There are a number of global and regional seismic discontinuities in the crust and upper mantle, which play a significant role in understanding the physical and chemical properties and geodynamic processes of the crust and upper mantle. Broadband teleseismic P-waveform inversion is studied in particular. Teleseismic P waveforms contain much information related to the source time function, near-source structure, propagation effects through the mantle, receiver structure, and instrument response. The receiver function is isolated from the teleseismic P waveform through the rotation of the horizontal components into the ray direction and the deconvolution of the vertical component from the radial and tangential components of ground motion; the resulting time series is dominated by the local receiver structure effect and is largely insensitive to source and deep-mantle effects. The receiver function is the horizontal response, which suppresses multiple P-wave reflections and retains the direct wave and P-S converted waves, and it is sensitive to the vertical variation of S-wave velocity.
The velocity structure beneath a seismic station has different responses to the radial and vertical components of an incident teleseismic P wave. To avoid the limits caused by a simplified assumption on the vertical response, the receiver function method is amended: in the frequency domain, the transfer function is given by the ratio of the radial response to the vertical response of the medium to the P wave, and in the time domain the radial synthetic waveform can be obtained by the convolution of the transfer function with the vertical component. To overcome numerical instability, the generalized reflection and transmission coefficient matrix method is applied to calculate the synthetic waveform, so that all multiple-reflection and phase-conversion responses are included. A new inversion method, the VFSA-LM method, is used in this study; it successfully combines very fast simulated annealing (VFSA) with the damped least-squares (LM) inversion method. Synthetic waveform inversion tests confirm its effectiveness and efficiency. Broadband teleseismic P-waveform inversion is applied to the study of lithospheric velocity in China and its vicinage. From high-quality CDSN and IRIS data, we obtained an outline map showing the distribution of Asian continental crustal thickness. Based on these results, the features of the distribution of crustal thickness and the outline of crustal structure under the Asian continent are analyzed, and the principal characteristics of the Asian continental crust are advanced. There exist four vast areas of relatively minor variation in crustal thickness, namely the northern, eastern, southern, and central areas of the Asian crust. As a byproduct, earthquake location, a basic issue in seismology, is discussed. Because of the strong trade-off between the assumed origin time and the focal depth, and the nonlinearity of the inversion problem, this issue has not been fully settled.
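The frequency-domain step described above (spectral ratio of radial to vertical, then convolution back) can be sketched with a water-level-stabilized deconvolution; the water-level regularization is a common stabilizer added here as an assumption, and the two-spike receiver response is invented:

```python
import numpy as np

# Frequency-domain receiver-function sketch: the transfer function is the
# spectral ratio radial / vertical. Division is stabilized by clamping the
# denominator |V|^2 at a fraction (water level) of its maximum.

def receiver_function(radial, vertical, water_level=1e-3):
    R, V = np.fft.rfft(radial), np.fft.rfft(vertical)
    power = np.abs(V) ** 2
    denom = np.maximum(power, water_level * power.max())
    return np.fft.irfft(R * np.conj(V) / denom, n=len(radial))

# Synthetic check: build the radial trace as the vertical trace convolved
# (circularly, via FFT) with a two-spike response, then deconvolve it back.
rng = np.random.default_rng(0)
vertical = rng.normal(size=256)
response = np.zeros(256)
response[0], response[40] = 1.0, 0.5
radial = np.fft.irfft(np.fft.rfft(vertical) * np.fft.rfft(response), n=256)
rf = receiver_function(radial, vertical)
```

Convolving the recovered transfer function back with the vertical component reproduces the radial component, which is the consistency property the amended method relies on.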
Aimed at this problem, a new earthquake location method named the SAMS method is presented, in which the objective function is the absolute value of the travel-time residuals together with the arrival times, and the fast simulated annealing method is used for the inversion. Applied to the relocation of the Chi-Chi, Taiwan event of 21 September 1999, the results show that the SAMS method not only reduces the effects of the trade-off between origin time and focal depth but also attains better stability and resolving power. At the end of the paper, the inverse Q filtering method for compensating attenuation and frequency dispersion in depth-domain seismic sections is discussed. According to the forward and inverse results on synthesized seismic records, our depth-domain Q filtering operator is consistent with the laws of seismic waves in absorbing media: it not only accounts for the absorption of the waves by the media but also fits the deformation laws, namely the frequency dispersion of body waves. Two post-stack profiles of about 60 km from a neritic area of China were processed; the results show that, after the inverse Q filtering in the depth domain, the wavelet width in the middle and deep layers is compressed, the resolution and signal-to-noise ratio are enhanced, and the primary shape and energy distribution of the profile are retained.