923 results for parabolic-elliptic equation, inverse problems, factorization method
Abstract:
We scrutinize the concept of integrable nonlinear communication channels, resurrecting and extending the idea of eigenvalue communications in a novel context of nonsoliton coherent optical communications. Using the integrable nonlinear Schrödinger equation as a channel model, we introduce a new approach - the nonlinear inverse synthesis method - for digital signal processing based on encoding the information directly onto the nonlinear signal spectrum. The latter evolves trivially and linearly along the transmission line, thus, providing an effective eigenvalue division multiplexing with no nonlinear channel cross talk. The general approach is illustrated with a coherent optical orthogonal frequency division multiplexing transmission format. We show how the strategy based upon the inverse scattering transform method can be geared for the creation of new efficient coding and modulation standards for the nonlinear channel. © Published by the American Physical Society.
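For reference, the integrable channel model referred to above is the lossless, noise-free nonlinear Schrödinger equation; the dimensionless normalization below is one common convention and is assumed here rather than taken from the paper:

\[ i\,\frac{\partial q}{\partial z} + \frac{1}{2}\,\frac{\partial^2 q}{\partial t^2} + |q|^2 q = 0 , \]

where q(t, z) is the complex field envelope, z the propagation distance and t retarded time; the inverse scattering transform associates with q(·, z) a nonlinear spectrum, and it is onto this spectrum that the information is encoded.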
Abstract:
In linear communication channels, spectral components (modes) defined by the Fourier transform of the signal propagate without interactions with each other. In certain nonlinear channels, such as the one modelled by the classical nonlinear Schrödinger equation, there are nonlinear modes (nonlinear signal spectrum) that also propagate without interacting with each other and without corresponding nonlinear cross talk, effectively, in a linear manner. Here, we describe in a constructive way how to introduce such nonlinear modes for a given input signal. We investigate the performance of the nonlinear inverse synthesis (NIS) method, in which the information is encoded directly onto the continuous part of the nonlinear signal spectrum. This transmission technique, combined with the appropriate distributed Raman amplification, can provide an effective eigenvalue division multiplexing with high spectral efficiency, thanks to highly suppressed channel cross talk. The proposed NIS approach can be integrated with any modulation format. Here, we demonstrate numerically the feasibility of merging the NIS technique in a burst mode with high spectral efficiency methods, such as orthogonal frequency division multiplexing and Nyquist pulse shaping with advanced modulation formats (e.g., QPSK, 16QAM, and 64QAM), showing a performance improvement up to 4.5 dB, which is comparable to results achievable with multi-step per span digital back propagation.
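The interaction-free evolution of the nonlinear modes can be made explicit for the continuous part of the spectrum. Writing r(ξ) for the reflection coefficient of the Zakharov-Shabat scattering problem associated with the normalized NLSE quoted above, propagation over a distance z reduces to a pure phase rotation,

\[ r(\xi, z) = r(\xi, 0)\, e^{\,2 i \xi^{2} z} , \]

so distinct spectral components never mix; the sign and numerical factor in the exponent depend on the chosen normalization of the equation and of the spectral parameter, so this should be read as a sketch of the mechanism rather than the paper's exact formula.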
Abstract:
The nonlinear inverse synthesis (NIS) method, in which information is encoded directly onto the continuous part of the nonlinear signal spectrum, has been proposed recently as a promising digital signal processing technique for combating fiber nonlinearity impairments. However, because the NIS method is based on the integrability property of the lossless nonlinear Schrödinger equation, the original approach can only be applied directly to optical links with ideal distributed Raman amplification. In this paper, we propose and assess a modified scheme of the NIS method, which can be used effectively in standard optical links with lumped amplifiers, such as erbium-doped fiber amplifiers (EDFAs). The proposed scheme takes into account the average effect of the fiber loss to obtain an integrable model (lossless path-averaged model) to which the NIS technique is applicable. We found that the error between the lossless path-averaged and lossy models increases linearly with transmission distance and input power (measured in dB). We numerically demonstrate the feasibility of the proposed NIS scheme in a burst mode with an orthogonal frequency division multiplexing (OFDM) transmission scheme with advanced modulation formats (e.g., QPSK, 16QAM, and 64QAM), showing a performance improvement up to 3.5 dB; these results are comparable to those achievable with multi-step per span digital backpropagation.
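A standard route to the lossless path-averaged (LPA) model is to absorb the span-periodic power variation caused by fiber loss and lumped EDFA gain into an averaged nonlinear coefficient. With loss coefficient α, Kerr coefficient γ and amplifier spacing L_a (generic symbols, assumed here rather than taken from the paper), the rescaled field obeys the integrable NLSE

\[ i\,\frac{\partial q}{\partial z} - \frac{\beta_2}{2}\,\frac{\partial^2 q}{\partial t^2} + \gamma_{\mathrm{eff}}\,|q|^2 q = 0 , \qquad \gamma_{\mathrm{eff}} = \gamma\,\frac{1 - e^{-\alpha L_a}}{\alpha L_a} , \]

to which the NIS machinery can then be applied; the discrepancy between this averaged model and the true lossy propagation is the error quantified in the abstract.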
Abstract:
MSC 2010: 26A33, 33E12, 34K29, 34L15, 35K57, 35R30
Abstract:
We study the Dirichlet to Neumann operator for the Riemannian wave equation on a compact Riemannian manifold. If the Riemannian manifold is modelled as an elastic medium, this operator represents the data available to an observer on the boundary of the manifold when the manifold is set into motion through boundary vibrations. We study the Dirichlet to Neumann operator when vibrations are imposed and data recorded on disjoint sets, a useful setting for applications. We prove that this operator determines the Dirichlet to Neumann operator where sources and observations are on the same set, provided a spectral condition on the Laplace-Beltrami operator for the manifold is satisfied. We prove this by providing an implementable procedure for determining a portion of the Riemannian manifold near the area where sources are applied. Drawing on established results, an immediate corollary is that a compact Riemannian manifold can be reconstructed from the Dirichlet to Neumann operator where sources and observations are on disjoint sets.
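For concreteness, the operator studied here admits the standard formulation below (function spaces and the restriction of sources and observations to disjoint boundary sets are as described in the abstract):

\[
\partial_t^2 u - \Delta_g u = 0 \ \text{ in } M \times (0,T), \qquad
u|_{t=0} = \partial_t u|_{t=0} = 0, \qquad
u|_{\partial M \times (0,T)} = f, \qquad
\Lambda f := \partial_\nu u\big|_{\partial M \times (0,T)} ,
\]

with the partial-data version recording the normal derivative only on a boundary set disjoint from the support of the source f.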
Abstract:
We develop the energy norm a-posteriori error estimation for hp-version discontinuous Galerkin (DG) discretizations of elliptic boundary-value problems on 1-irregularly, isotropically refined affine hexahedral meshes in three dimensions. We derive a reliable and efficient indicator for the errors measured in terms of the natural energy norm. The ratio of the efficiency and reliability constants is independent of the local mesh sizes and depends only weakly on the polynomial degrees. In our analysis we make use of an hp-version averaging operator in three dimensions, which we explicitly construct and analyze. We use our error indicator in an hp-adaptive refinement algorithm and illustrate its practical performance in a series of numerical examples. Our numerical results indicate that exponential rates of convergence are achieved for problems with smooth solutions, as well as for problems with isotropic corner singularities.
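In broad outline, energy-norm indicators of this type combine a scaled interior residual with face jump terms; the schematic form below, written for a discretization u_h of the model problem −Δu = f with local mesh size h_K and polynomial degree p_K, is only an illustration of the structure and not the exact indicator derived in the paper:

\[
\eta_K^2 \;=\; \frac{h_K^2}{p_K^2}\,\big\| f + \Delta u_h \big\|_{L^2(K)}^2
\;+\; \frac{h_K}{p_K}\,\big\| [\![ \nabla u_h \cdot \mathbf{n} ]\!] \big\|_{L^2(\partial K \setminus \partial\Omega)}^2
\;+\; \frac{p_K^2}{h_K}\,\big\| [\![ u_h ]\!] \big\|_{L^2(\partial K)}^2 ,
\qquad
\eta = \Big( \sum_K \eta_K^2 \Big)^{1/2} .
\]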
A hybrid Particle Swarm Optimization - Simplex algorithm (PSOS) for structural damage identification
Abstract:
This study proposes a new PSOS model-based damage identification procedure using frequency domain data. The formulation of the objective function for the minimization problem is based on the Frequency Response Functions (FRFs) of the system. A novel strategy for the control of the Particle Swarm Optimization (PSO) parameters based on the Nelder-Mead algorithm (Simplex method) is presented; consequently, the convergence of the PSOS becomes independent of the heuristic constants, and its stability and confidence are enhanced. The formulated hybrid method performs better on different benchmark functions than Simulated Annealing (SA) and the basic PSO (PSO(b)). Two damage identification problems, taking into consideration the effects of noisy and incomplete data, were studied: first, a 10-bar truss and, second, a cracked free-free beam, both modeled with finite elements. In these cases, the damage location and extent were successfully determined. Finally, a non-linear oscillator (Duffing oscillator) was identified by PSOS, providing good results. (C) 2009 Elsevier Ltd. All rights reserved.
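To make the role of the heuristic constants concrete, the sketch below implements a plain PSO loop in Python: the inertia weight w and acceleration constants c1, c2 are exactly the parameters that the hybrid PSOS scheme adapts with the Nelder-Mead simplex. That coupling, and the FRF-based damage objective, are not reproduced here; the function name, defaults and test function are illustrative assumptions only.

import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    # Basic particle swarm optimizer; w, c1, c2 are the heuristic constants
    # that the PSOS scheme re-tunes via the Nelder-Mead simplex (not shown).
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))   # particle positions
    v = np.zeros_like(x)                                    # particle velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()                # best position found so far
    for _ in range(iters):
        r1, r2 = rng.random((2,) + x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Usage example on a standard benchmark (2-D Rosenbrock function).
rosen = lambda p: (1 - p[0])**2 + 100 * (p[1] - p[0]**2)**2
print(pso_minimize(rosen, bounds=[(-2, 2), (-2, 2)]))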
Abstract:
We develop a new iterative filter diagonalization (FD) scheme based on Lanczos subspaces and demonstrate its application to the calculation of bound-state and resonance eigenvalues. The new scheme combines the Lanczos three-term vector recursion for the generation of a tridiagonal representation of the Hamiltonian with a three-term scalar recursion to generate filtered states within the Lanczos representation. Eigenstates in the energy windows of interest can then be obtained by solving a small generalized eigenvalue problem in the subspace spanned by the filtered states. The scalar filtering recursion is based on the homogeneous eigenvalue equation of the tridiagonal representation of the Hamiltonian, and is simpler and more efficient than our previous quasi-minimum-residual filter diagonalization (QMRFD) scheme (H. G. Yu and S. C. Smith, Chem. Phys. Lett., 1998, 283, 69), which was based on solving for the action of the Green operator via an inhomogeneous equation. A low-storage method for the construction of Hamiltonian and overlap matrix elements in the filtered-basis representation is devised, in which contributions to the matrix elements are computed simultaneously as the recursion proceeds, allowing coefficients of the filtered states to be discarded once their contribution has been evaluated. Application to the HO2 system shows that the new scheme is highly efficient and can generate eigenvalues with the same numerical accuracy as the basic Lanczos algorithm.
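The two three-term recursions at the heart of the scheme can be summarized as follows (standard Lanczos notation; the specific filter used in the paper is not reproduced). The vector recursion

\[ \beta_{j+1} v_{j+1} \;=\; H v_j - \alpha_j v_j - \beta_j v_{j-1}, \qquad \alpha_j = v_j^{\dagger} H v_j , \]

builds an orthonormal Krylov basis in which H is represented by the tridiagonal matrix T with diagonal entries α_j and off-diagonal entries β_j. Filtered states at target energies E_ℓ are then formed entirely inside this subspace, schematically |χ_ℓ⟩ = Σ_j c_j(E_ℓ) v_j with the coefficients c_j generated by a scalar three-term recursion derived from T, and the eigenvalues in the window follow from a small generalized eigenvalue problem H c = E S c in the filtered basis.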
Abstract:
A package of B-spline finite strip models is developed for the linear analysis of piezolaminated plates and shells. This package is associated with a global optimization technique in order to enhance the performance of these types of structures, subjected to various types of objective functions and/or constraints, with discrete and continuous design variables. The models considered are based on a higher-order displacement field, and one can apply them to the static, free vibration and buckling analyses of laminated adaptive structures with arbitrary lay-ups, loading and boundary conditions. Genetic algorithms, with either binary or floating-point encoding of design variables, were considered to find optimal locations of piezoelectric actuators as well as to determine the best voltages applied to them in order to obtain a desired structure shape. These models provide an overall economy of computing effort for static and vibration problems.
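Schematically, the shape-control task handed to the genetic algorithm is a mixed discrete-continuous program of the form below, where the binary vector s selects actuator locations, φ collects the applied voltages, w(s, φ) is the computed structural response and w_d the desired shape; these symbols are generic placeholders rather than the paper's notation:

\[
\min_{\, s \in \{0,1\}^{n},\; \phi \in \mathbb{R}^{m}} \; \big\| w(s,\phi) - w_d \big\|^2
\quad \text{subject to} \quad \sum_i s_i \le n_{\max}, \qquad |\phi_j| \le \phi_{\max} .
\]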
Abstract:
In this work we perform a comparison of two different numerical schemes for the solution of the time-fractional diffusion equation with variable diffusion coefficient and a nonlinear source term. The two methods are the implicit numerical scheme presented in [M.L. Morgado, M. Rebelo, Numerical approximation of distributed order reaction-diffusion equations, Journal of Computational and Applied Mathematics 275 (2015) 216-227], which is adapted to our type of equation, and a collocation method where Chebyshev polynomials are used to reduce the fractional differential equation to a system of ordinary differential equations.
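The equation treated by both schemes is of the following general form, with a Caputo time-fractional derivative of order 0 < α < 1, a variable diffusion coefficient d(x) and a nonlinear source f (the divergence-form diffusion term and the symbols are assumptions for illustration; initial and boundary conditions are as in the paper):

\[
{}^{C}D_t^{\alpha} u(x,t) \;=\; \frac{\partial}{\partial x}\!\left( d(x)\,\frac{\partial u}{\partial x}(x,t) \right) + f\big(x,t,u(x,t)\big),
\qquad
{}^{C}D_t^{\alpha} u(x,t) \;=\; \frac{1}{\Gamma(1-\alpha)} \int_0^{t} (t-s)^{-\alpha}\,\frac{\partial u}{\partial s}(x,s)\, ds .
\]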
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field.
The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach proves to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
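The gradual-deformation proposal mentioned at the end can be sketched as follows for a multi-Gaussian prior (generic notation, not the thesis's): the current model realization m_cur and an independent unconditional realization m_new from the same prior are combined through a single angle θ that sets the perturbation strength,

\[
m_{\mathrm{prop}}(\theta) \;=\; m_{\mathrm{cur}}\cos\theta \;+\; m_{\mathrm{new}}\sin\theta ,
\qquad m_{\mathrm{cur}},\, m_{\mathrm{new}} \sim \mathcal{N}(0, C), \quad \theta \in (0, \tfrac{\pi}{2}] ,
\]

and because cos²θ + sin²θ = 1 the proposal again honours the prior covariance C; small θ yields gentle, highly correlated MCMC moves, while θ = π/2 amounts to an independent redraw.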
Abstract:
The inversion problem concerning the windowed Fourier transform is considered. It is shown that, out of the infinite solutions that the problem admits, the windowed Fourier transform is the "optimal" solution according to a maximum-entropy selection criterion.
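For reference, with analysis window g the windowed Fourier transform and its standard synthesis formula read (one common normalization; conventions vary):

\[
(V_g f)(t,\omega) = \int_{-\infty}^{\infty} f(s)\,\overline{g(s-t)}\, e^{-i\omega s}\, ds ,
\qquad
f(s) = \frac{1}{2\pi\,\|g\|_2^{2}} \iint (V_g f)(t,\omega)\, g(s-t)\, e^{\,i\omega s}\, dt\, d\omega .
\]

The non-uniqueness discussed in the abstract concerns the synthesis step: many functions of (t, ω) reproduce the same f through the right-hand formula, and the windowed Fourier transform itself is the one singled out by the maximum-entropy criterion.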