972 results for Ill-posed problem


Relevance: 90.00%

Abstract:

The image reconstruction problem encountered in diffuse optical tomographic imaging is ill-posed in nature, necessitating the use of regularization to obtain stable solutions. This regularization also results in loss of resolution in the reconstructed images. A model-resolution-based framework for improving the reconstructed image characteristics using the basis pursuit deconvolution method is proposed here. The proposed method performs this deconvolution as an additional step in the image reconstruction scheme. It is shown, in both numerical and experimental gelatin phantom cases, that the proposed method yields better recovery of the target shapes compared to the traditional method, without loss of quantitative accuracy in the results.
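
A minimal sketch of the flavour of this deconvolution step: basis pursuit amounts to an L1-regularised least-squares problem, solved here with plain iterative soft thresholding (ISTA). The Gaussian model-resolution matrix R and the reconstruction it blurs are toy stand-ins, not the paper's actual operators.

```python
import numpy as np

def ista_l1(A, b, lam, n_iter=500):
    """Minimise 0.5*||A x - b||^2 + lam*||x||_1 by iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-term gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L      # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Toy stand-ins: R blurs a sparse target into a smooth "regularised" reconstruction.
n = 64
i = np.arange(n)
R = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 3.0) ** 2)
R /= R.sum(axis=1, keepdims=True)          # hypothetical model-resolution matrix
x_true = np.zeros(n)
x_true[[20, 40]] = 1.0
recon = R @ x_true                         # blurred reconstruction
deconv = ista_l1(R, recon, lam=1e-4)       # basis-pursuit-style sharpened estimate
```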

Relevance: 90.00%

Abstract:

In the present paper, crack identification problems are investigated. Problems of this kind belong to the scope of inverse problems, and their solutions are usually ill-posed. The paper comprises two parts: (1) based on the dynamic BIEM and an optimization method, using measured dynamic information on the outer boundary, the identification of a crack in a finite domain is investigated, and a method for choosing a highly sensitive frequency region is proposed to improve the precision; (2) based on the 3-D static BIEM and hypersingular integral equation theory, the identification of a penny-shaped crack in a finite body is reduced to an optimization problem. The investigation provides some initial understanding of 3-D inverse problems.
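
A schematic of the optimization formulation only (not the paper's boundary element machinery): crack parameters are fitted by minimising the misfit between measured and simulated boundary responses over a chosen frequency band. The function `forward_response` is a purely hypothetical stand-in for the dynamic BEM solver.

```python
import numpy as np
from scipy.optimize import minimize

def forward_response(crack, freqs):
    """Hypothetical stand-in for the BEM solver: a smooth boundary response
    as a function of crack centre c and half-length a (illustrative only)."""
    c, a = crack
    return np.sin(freqs * c) * np.exp(-a * freqs)

freqs = np.linspace(1.0, 5.0, 20)        # probe frequencies (a high-sensitivity band)
true_crack = np.array([0.6, 0.3])
measured = forward_response(true_crack, freqs)   # "measured" outer-boundary data

def misfit(crack):
    return np.sum((forward_response(crack, freqs) - measured) ** 2)

result = minimize(misfit, x0=[0.4, 0.5], method="Nelder-Mead")
print(result.x)                          # recovered crack parameters
```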

Relevance: 90.00%

Abstract:

In this paper, we study the issues of modeling, numerical methods, and simulation with comparison to experimental data for the particle-fluid two-phase flow problem involving a solid-liquid mixed medium. The physical situation considered is a pulsed liquid fluidized bed. The mathematical model is based on the assumptions of one-dimensional flow, incompressibility of both the particle and fluid phases, equal particle diameters, and negligible wall friction on both phases. The model consists of a set of coupled differential equations describing the conservation of mass and momentum in both phases, with coupling and interaction between the two phases. We demonstrate conditions under which the system is either mathematically well posed or ill posed. We consider the general model with additional physical viscosities and/or additional virtual mass forces, both of which stabilize the system. Two numerical methods, one first-order accurate and the other fifth-order accurate, are used to solve the models. A change-of-variable technique effectively handles the changing domain and boundary conditions. The numerical methods are demonstrated to be stable and convergent through careful numerical experiments. Simulation results for a realistic pulsed liquid fluidized bed are provided and compared with experimental data. (C) 2004 Elsevier Ltd. All rights reserved.
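
To give a feel for the lower-order discretisation, here is a minimal first-order upwind update for a scalar particle-phase mass balance with a prescribed positive velocity; the coupled two-phase momentum equations, the fifth-order scheme, and the change-of-variable treatment of the moving domain are beyond a short sketch.

```python
import numpy as np

# First-order upwind update for d(alpha)/dt + d(alpha*u)/dx = 0, a particle-phase
# mass balance with a prescribed velocity field u > 0 (illustrative only).
nx, length, dt, nt = 200, 1.0, 2e-4, 1000
dx = length / nx
x = np.linspace(0.0, length, nx)
alpha = 0.4 + 0.1 * np.exp(-((x - 0.5) / 0.05) ** 2)   # particle volume fraction
u = np.full(nx, 1.0)                                    # prescribed particle velocity

for _ in range(nt):
    flux = alpha * u
    alpha[1:] -= dt / dx * (flux[1:] - flux[:-1])       # upwind difference (u > 0)
    alpha[0] = alpha[1]                                  # crude boundary closure
```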

Relevance: 90.00%

Abstract:

A number of recent scientific and engineering problems require signals to be decomposed into a product of a slowly varying positive envelope and a quickly varying carrier whose instantaneous frequency also varies slowly over time. Although signal processing provides algorithms for so-called amplitude and frequency demodulation (AFD), there are well-known problems with all of the existing methods. Motivated by the fact that AFD is ill-posed, we approach the problem using probabilistic inference. The new approach, called probabilistic amplitude and frequency demodulation (PAFD), models instantaneous frequency using an auto-regressive generalization of the von Mises distribution, and the envelopes using Gaussian auto-regressive dynamics with a positivity constraint. A novel form of expectation propagation is used for inference. We demonstrate that although PAFD is computationally demanding, it outperforms previous approaches on synthetic and real signals in clean, noisy and missing-data settings.
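
For contrast, the classical non-probabilistic baseline alluded to in the abstract can be written in a few lines using the analytic signal: the Hilbert envelope gives the amplitude, and the derivative of the unwrapped phase gives the instantaneous frequency. This is standard signal processing, not the PAFD inference itself.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 2 * t)             # slow positive envelope
phase = 2 * np.pi * 50 * t + 3 * np.sin(2 * np.pi * 1 * t)   # slowly varying IF
x = envelope * np.cos(phase)

analytic = hilbert(x)                        # analytic signal via Hilbert transform
amp = np.abs(analytic)                       # amplitude-demodulation estimate
inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)  # in Hz
```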

Relevance: 90.00%

Abstract:

There are many methods for decomposing signals into a sum of amplitude- and frequency-modulated sinusoids. In this paper we take a new estimation-based approach. Identifying the problem as ill-posed, we show how to regularize the solution by imposing soft constraints on the amplitude and phase variables of the sinusoids. Estimation proceeds using a version of Kalman smoothing. We evaluate the method on synthetic and natural, clean and noisy signals, showing that it outperforms previous decompositions, but at a higher computational cost. © 2012 IEEE.
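
A minimal sketch of the state-space idea, assuming the simplest possible setting: one sinusoid tracked as a rotating two-dimensional phasor state with a Kalman filter, where small process noise plays the role of the soft constraint letting amplitude and phase drift slowly. The paper's model and its full smoothing pass are richer than this.

```python
import numpy as np

# State s = (a*cos(phi), a*sin(phi)) rotates at the carrier frequency; process
# noise lets amplitude and phase wander slowly. We observe the real part.
fs, f0 = 500.0, 20.0
omega = 2 * np.pi * f0 / fs
A = np.array([[np.cos(omega), -np.sin(omega)],
              [np.sin(omega),  np.cos(omega)]])
H = np.array([[1.0, 0.0]])
Q, R = 1e-4 * np.eye(2), 0.05

t = np.arange(0.0, 2.0, 1.0 / fs)
y = (1 + 0.5 * np.sin(2 * np.pi * 0.5 * t)) * np.cos(2 * np.pi * f0 * t)
y = y + np.sqrt(R) * np.random.default_rng(1).standard_normal(len(t))

s, P, amps = np.zeros(2), np.eye(2), []
for obs in y:
    s, P = A @ s, A @ P @ A.T + Q            # predict
    K = P @ H.T / (H @ P @ H.T + R)          # Kalman gain (scalar innovation)
    s = s + (K * (obs - H @ s)).ravel()      # update
    P = (np.eye(2) - K @ H) @ P
    amps.append(np.hypot(*s))                # instantaneous amplitude estimate
```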

Relevance: 90.00%

Abstract:

This paper focuses on the problem of incomplete data in applications of circular cone-beam computed tomography. This problem is frequently encountered in the medical imaging sciences and in some industrial imaging systems; for example, it is crucial when the high-density region of an object can only be penetrated by X-rays over a limited angular range. As the projection data are then only available in an angular range, this incomplete data problem reduces to the limited-angle problem, which is an ill-posed inverse problem. This paper reports a modified total variation minimisation method to mitigate the data insufficiency in tomographic imaging. The proposed method is robust and efficient in the reconstruction task, and the convergence of the alternating minimisation scheme is shown. The results demonstrate that the new reconstruction method achieves reasonable performance. (C) 2010 Elsevier B.V. All rights reserved.
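
A toy sketch of the alternating idea, assuming a generic setup: a gradient step on the data-fidelity term alternates with descent on a smoothed isotropic total variation penalty. The random matrix stands in for a limited-angle projection operator; the paper's modified TV scheme differs in detail.

```python
import numpy as np

def tv_grad(img, eps=1e-8):
    """Gradient of a smoothed isotropic total variation of a 2-D image."""
    gx = np.diff(img, axis=0, append=img[-1:, :])
    gy = np.diff(img, axis=1, append=img[:, -1:])
    mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
    dx, dy = gx / mag, gy / mag
    div = (dx - np.roll(dx, 1, axis=0)) + (dy - np.roll(dy, 1, axis=1))
    return -div

n = 32
rng = np.random.default_rng(0)
A = rng.standard_normal((n * n // 2, n * n)) / n   # stand-in for a limited-angle system
x_true = np.zeros((n, n)); x_true[10:22, 10:22] = 1.0
b = A @ x_true.ravel()

x = np.zeros(n * n)
step, tv_step, lam = 0.05, 0.1, 0.02
for _ in range(100):
    x -= step * A.T @ (A @ x - b)                  # data-fidelity gradient step
    for _ in range(5):                             # alternating TV-minimisation step
        x -= tv_step * lam * tv_grad(x.reshape(n, n)).ravel()
```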

Relevance: 90.00%

Abstract:

Post-stack seismic impedance inversion is a key technology for reservoir prediction and identification. Geophysicists have studied the problem extensively, but the developed methods still do not fully satisfy practical requirements. Different inversion methods give different results, and even the same method gives different results in different hands. The reasons include the quality of the seismic data, inaccurate wavelet extraction, errors between the normal-incidence assumption and the real situation, and so on. In addition, there are two main influencing factors: one is the band-limited nature of seismic data; the other is the ill-posedness of impedance inversion. Thus far, the most effective way to address the band-limited problem is constrained inversion, and the most effective way to solve ill-posed problems is regularization assisted by proper optimization techniques. This thesis systematically introduces iterative regularization methods and numerical optimization methods for impedance inversion. A regularized restarted conjugate gradient method for solving ill-posed problems in impedance inversion is proposed. Numerical simulations are made and field-data applications are performed, revealing that the proposed algorithm is superior to the conventional conjugate gradient method. Finally, non-smooth optimization is proposed as a further research direction in seismic impedance inversion, in line with practical requirements.
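
The sketch below shows a generic regularised conjugate gradient with restarts for min ||Gm - d||^2 + lam*||m||^2; a Ricker-style wavelet convolution stands in for the band-limited forward operator. It illustrates the idea rather than the thesis's exact algorithm.

```python
import numpy as np

def cgls(G, d, lam, x0, n_iter):
    """Conjugate gradient on the normal equations (G^T G + lam*I) x = G^T d."""
    x = x0.copy()
    r = G.T @ (d - G @ x) - lam * x
    p = r.copy()
    for _ in range(n_iter):
        Ap = G.T @ (G @ p) + lam * p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        p = r_new + (r_new @ r_new) / (r @ r) * p
        r = r_new
    return x

# Toy band-limited forward operator: convolution with a Ricker-like wavelet.
n = 200
tt = np.arange(-20, 21)
w = (1 - 2 * (0.1 * tt) ** 2) * np.exp(-(0.1 * tt) ** 2)
G = np.array([np.convolve(np.eye(n)[i], w, mode="same") for i in range(n)]).T

rng = np.random.default_rng(2)
m_true = np.cumsum(0.1 * rng.standard_normal(n))   # impedance-like model
d = G @ m_true + 0.01 * rng.standard_normal(n)

m = np.zeros(n)
for _ in range(5):                                 # restarted regularised CG
    m = cgls(G, d, lam=0.1, x0=m, n_iter=20)
```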

Relevance: 90.00%

Abstract:

Many problems in early vision are ill posed. Edge detection is a typical example. This paper applies regularization techniques to the problem of edge detection. We derive an optimal filter for edge detection with a size controlled by the regularization parameter $\lambda$ and compare it to the Gaussian filter. A formula relating the signal-to-noise ratio to the parameter $\lambda$ is derived from regularization analysis for the case of small values of $\lambda$. We also discuss the method of Generalized Cross Validation for obtaining the optimal filter scale. Finally, we use our framework to explain two perceptual phenomena: coarsely quantized images becoming recognizable by either blurring or adding noise.
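
Since the derived optimal filter is close to a (derivative of) Gaussian, a quick illustration of how the filter scale trades noise suppression against localisation; the mapping from the regularization parameter $\lambda$ to the Gaussian scale sigma is only conceptual here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)
x = np.zeros(400); x[200:] = 1.0                  # ideal step edge
y = x + 0.1 * rng.standard_normal(400)            # noisy observation

# Derivative-of-Gaussian filtering: sigma plays the role of the scale that
# the regularization parameter lambda controls (more noise -> larger scale).
for sigma in (1.0, 4.0, 16.0):
    response = gaussian_filter1d(y, sigma=sigma, order=1)
    print(sigma, int(np.argmax(np.abs(response))))   # estimated edge location
```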

Relevance: 90.00%

Abstract:

Inverse diffraction consists in determining the field distribution on a boundary surface from knowledge of the distribution on a surface situated within the domain where the wave propagates. This problem is a good example for illustrating the use of least-squares methods (also called regularization methods) for solving linear ill-posed inverse problems. We focus on obtaining error bounds for regularized solutions and show that the stability of the restored field far from the boundary surface is quite satisfactory: the error is proportional to ε^α (α ≃ 1), ε being the error in the data (Hölder continuity). However, the error in the restored field on the boundary surface is only proportional to an inverse power of |ln ε| (logarithmic continuity). Such poor continuity implies some limitations on the resolution achievable in practice; in this case, the resolution limit is seen to be about half of the wavelength. Copyright © 1981 by The Institute of Electrical and Electronics Engineers, Inc.
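
The two stability estimates can be summarised as follows (a reconstruction in generic notation, with C, alpha and beta placeholder constants):

```latex
% far from the boundary surface (H\"older continuity):
\| f_{\mathrm{restored}} - f \| \le C \, \epsilon^{\alpha}, \qquad \alpha \simeq 1,
% on the boundary surface itself (logarithmic continuity):
\| f_{\mathrm{restored}} - f \| \le C \, |\ln \epsilon|^{-\beta}, \qquad \beta > 0,
% where \epsilon is the error level in the data.
```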

Relevance: 90.00%

Abstract:

This doctoral thesis consists of three chapters dealing with large-scale portfolio choice and risk measurement. The first chapter addresses the problem of estimation error in large portfolios within the mean-variance framework. The second chapter explores the importance of currency risk for portfolios of domestic assets and studies the links between the stability of large-portfolio weights and currency risk. Finally, under the assumption that the decision maker is pessimistic, the third chapter derives the risk premium and a measure of pessimism, and proposes a methodology for estimating the derived measures.

The first chapter improves optimal portfolio choice within the mean-variance framework of Markowitz (1952). This is motivated by the very disappointing results obtained when the mean and variance are replaced by their sample estimates, a problem that is amplified when the number of assets is large and the sample covariance matrix is singular or nearly singular. We examine four regularization techniques for stabilizing the inverse of the covariance matrix: ridge, spectral cut-off, Landweber-Fridman, and LARS Lasso. Each of these methods involves a tuning parameter that must be selected. The main contribution of this part is to derive a purely data-driven method for selecting the regularization parameter optimally, i.e. so as to minimize the expected loss of utility. Specifically, a cross-validation criterion that takes the same form for all four regularization methods is derived. The resulting regularized rules are then compared with the plug-in rule and the naive 1/N strategy in terms of expected utility loss and Sharpe ratio. Performance is measured in-sample and out-of-sample for various sample sizes and numbers of assets. The simulations and the empirical illustration show, above all, that regularizing the covariance matrix significantly improves the data-based Markowitz rule and outperforms the naive portfolio, especially in cases where the estimation error problem is severe.

In the second chapter, we investigate the extent to which optimal and stable portfolios of domestic assets can reduce or eliminate currency risk, using monthly returns on 48 US industries over the period 1976-2008. To address the instability problems inherent in large portfolios, we adopt the spectral cut-off regularization method. This yields a family of stable optimal portfolios, allowing investors to choose different percentages of principal components (or degrees of stability). Our empirical tests are based on an international asset pricing model (IAPM) in which currency risk is decomposed into two factors, representing the currencies of developed countries on the one hand and those of emerging countries on the other. Our results indicate that currency risk is priced and time-varying for minimum-risk stable portfolios. Moreover, these strategies lead to a significant reduction in exchange rate risk exposure, while the contribution of the currency risk premium remains unchanged on average. Since optimal portfolio weights are an alternative to market capitalization weights, this chapter complements the literature showing that the risk premium is significant at the industry level and at the country level in most countries.

In the final chapter, we derive a risk premium measure for rank-dependent preferences and propose a measure of the degree of pessimism, given a distortion function. The measures introduced generalize the risk premium derived under expected utility theory, which is frequently violated in both experimental and real-world settings. Within the large family of preferences considered, particular attention is paid to CVaR (conditional value-at-risk). This risk measure is increasingly used for portfolio construction and is advocated as a complement to VaR (value-at-risk), which has been used since 1996 by the Basel Committee. In addition, we provide the statistical framework required for inference on the proposed measures. Finally, the properties of the proposed estimators are assessed through a Monte Carlo study and an empirical illustration using daily US stock market returns over the period 2000-2011.
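
A minimal sketch of one of the four schemes discussed, spectral cut-off, applied to stabilising the inverse of the sample covariance in a minimum-variance-style rule. The simulated returns are placeholders, and the tuning (the fraction of principal components kept) would in the thesis be chosen by the derived cross-validation criterion.

```python
import numpy as np

def spectral_cutoff_weights(returns, keep_frac):
    """Minimum-variance-style weights using a spectral cut-off inverse of the
    sample covariance: only the leading principal components are kept."""
    Sigma = np.cov(returns, rowvar=False)
    vals, vecs = np.linalg.eigh(Sigma)
    idx = np.argsort(vals)[::-1][: max(1, int(keep_frac * len(vals)))]
    Sigma_inv = (vecs[:, idx] / vals[idx]) @ vecs[:, idx].T   # truncated pseudo-inverse
    w = Sigma_inv @ np.ones(Sigma.shape[0])
    return w / w.sum()                                        # fully invested portfolio

rng = np.random.default_rng(0)
returns = 0.05 * rng.standard_normal((120, 48))   # stand-in for 48 industry returns
w = spectral_cutoff_weights(returns, keep_frac=0.5)
```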

Relevance: 90.00%

Abstract:

Electromagnetic tomography has been applied to problems in nondestructive evaluation, ground-penetrating radar, synthetic aperture radar, target identification, electrical well logging, medical imaging, etc. The problem of electromagnetic tomography involves the estimation of the cross-sectional distribution of dielectric permittivity, conductivity, etc. based on measurements of the scattered fields. The inverse scattering problem of electromagnetic imaging is highly nonlinear and ill-posed, and is liable to get trapped in local minima. The iterative solution techniques employed for computing the inverse scattering problem of electromagnetic imaging are highly computation intensive. Thus the solution of the electromagnetic imaging problem is beset with convergence and computational issues. The aim of this thesis is to develop methods for improving the convergence and reducing the total computation for tomographic imaging of two-dimensional dielectric cylinders illuminated by TM polarized waves, where the scattering problem is defined using scalar equations. A multi-resolution frequency hopping approach is proposed, as opposed to the conventional frequency hopping approach employed to image large inhomogeneous scatterers. The strategy was tested on both synthetic and experimental data and gave results that were better localized, and it also accelerated the iterative procedure employed for the imaging. A Degree of Symmetry formulation was introduced to locate the scatterer in the investigation domain when the scatterer cross section is circular. The investigation domain could thus be reduced, which reduces the degrees of freedom of the inverse scattering process, so that the entire measured scattered data is available for the optimization of fewer pixels. This resulted in better and more robust reconstructions of the scatterer cross-sectional profile. The Degree of Symmetry formulation could also be applied to the practical problem of limited-angle tomography, as in the case of a buried pipeline, where the ill-posedness is much more severe. The formulation was also tested using data from an experimental setup designed for the purpose, and the experimental results confirmed its practical applicability.
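
The frequency hopping idea, warm-starting each higher-frequency (better-resolved) inversion from the previous estimate, can be caricatured on a toy linear deconvolution problem in which kernel width plays the role of illumination frequency. This illustrates only the coarse-to-fine strategy, not the thesis's nonlinear scattering solver.

```python
import numpy as np

def gaussian_kernel_matrix(n, width):
    i = np.arange(n)
    K = np.exp(-0.5 * ((i[:, None] - i[None, :]) / width) ** 2)
    return K / K.sum(axis=1, keepdims=True)

def gradient_descent(K, d, x0, n_iter=200, step=0.5):
    x = x0.copy()
    for _ in range(n_iter):
        x -= step * K.T @ (K @ x - d)
    return x

n = 128
x_true = np.zeros(n); x_true[40:60] = 1.0; x_true[80:90] = 0.5
x = np.zeros(n)
for width in (8.0, 4.0, 2.0):          # hop from low to high "frequency"
    K = gaussian_kernel_matrix(n, width)
    d = K @ x_true                     # data observed at this frequency
    x = gradient_descent(K, d, x0=x)   # warm-started by the previous estimate
```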

Relevance: 90.00%

Abstract:

In this paper we consider the scattering of a plane acoustic or electromagnetic wave by a one-dimensional, periodic rough surface. We restrict the discussion to the case when the boundary is sound soft in the acoustic case, perfectly reflecting with TE polarization in the EM case, so that the total field vanishes on the boundary. We propose a uniquely solvable first kind integral equation formulation of the problem, which amounts to a requirement that the normal derivative of the Green's representation formula for the total field vanish on a horizontal line below the scattering surface. We then discuss the numerical solution by Galerkin's method of this (ill-posed) integral equation. We point out that, with two particular choices of the trial and test spaces, we recover the so-called SC (spectral-coordinate) and SS (spectral-spectral) numerical schemes of DeSanto et al., Waves Random Media, 8, 315-414, 1998. We next propose a new Galerkin scheme, a modification of the SS method that we term the SS* method, which is an instance of the well-known dual least squares Galerkin method. We show that the SS* method is always well-defined and is optimally convergent as the size of the approximation space increases. Moreover, we make a connection with the classical least squares method, in which the coefficients in the Rayleigh expansion of the solution are determined by enforcing the boundary condition in a least squares sense, pointing out that the linear system to be solved in the SS* method is identical to that in the least squares method. Using this connection we show that (reflecting the ill-posed nature of the integral equation solved) the condition number of the linear system in the SS* and least squares methods approaches infinity as the approximation space increases in size. We also provide theoretical error bounds on the condition number and on the errors induced in the numerical solution computed as a result of ill-conditioning. Numerical results confirm the convergence of the SS* method and illustrate the ill-conditioning that arises.
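
The source of the reported ill-conditioning can be reproduced in a few lines: in the least squares system the Rayleigh modes are evaluated on a line a distance h below the surface, where evanescent orders decay like exp(-(n^2 - k^2)^{1/2} h), so the condition number grows without bound with the truncation order. The setup below is a generic illustration of that effect, with arbitrary k and h.

```python
import numpy as np

k, h = 5.0, 0.5                        # wavenumber; distance below the surface
for N in (5, 10, 15, 20):              # truncation order of the Rayleigh expansion
    n = np.arange(-N, N + 1)
    x = np.linspace(0.0, 2 * np.pi, 4 * N + 2, endpoint=False)
    beta = np.emath.sqrt(k ** 2 - n ** 2)        # imaginary for evanescent orders
    A = np.exp(1j * n[None, :] * x[:, None] + 1j * beta[None, :] * h)
    print(N, np.linalg.cond(A))        # blows up as N increases
```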

Relevance: 90.00%

Abstract:

For the very large nonlinear dynamical systems that arise in a wide range of physical, biological and environmental problems, the data needed to initialize a numerical forecasting model are seldom available. To generate accurate estimates of the expected states of the system, both current and future, the technique of ‘data assimilation’ is used to combine the numerical model predictions with observations of the system measured over time. Assimilation of data is an inverse problem that for very large-scale systems is generally ill-posed. In four-dimensional variational assimilation schemes, the dynamical model equations provide constraints that act to spread information into data sparse regions, enabling the state of the system to be reconstructed accurately. The mechanism for this is not well understood. Singular value decomposition techniques are applied here to the observability matrix of the system in order to analyse the critical features in this process. Simplified models are used to demonstrate how information is propagated from observed regions into unobserved areas. The impact of the size of the observational noise and the temporal position of the observations is examined. The best signal-to-noise ratio needed to extract the most information from the observations is estimated using Tikhonov regularization theory. Copyright © 2005 John Wiley & Sons, Ltd.
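
A compact sketch of the diagnostic described: assemble the observability matrix of a toy linear advection model over the assimilation window, inspect its singular values, and reconstruct the initial state with Tikhonov-filtered SVD. The model, observation pattern, and regularization parameter are illustrative choices.

```python
import numpy as np

n, n_obs, n_times = 40, 5, 8
M = np.roll(np.eye(n), 1, axis=1)                 # simple advection: shift by one cell
H = np.eye(n)[:: n // n_obs]                      # observe every eighth grid point

# Observability matrix stacking H, HM, HM^2, ... over the assimilation window.
O = np.vstack([H @ np.linalg.matrix_power(M, t) for t in range(n_times)])

rng = np.random.default_rng(0)
x0 = np.sin(2 * np.pi * np.arange(n) / n)         # true initial state
y = O @ x0 + 0.01 * rng.standard_normal(O.shape[0])

U, s, Vt = np.linalg.svd(O, full_matrices=False)  # singular value analysis
lam = 0.05                                        # Tikhonov parameter vs noise level
filt = s / (s ** 2 + lam ** 2)                    # Tikhonov filter factors
x_est = Vt.T @ (filt * (U.T @ y))                 # regularised reconstruction
```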

Relevance: 90.00%

Abstract:

Inverse problems for dynamical system models of cognitive processes comprise the determination of synaptic weight matrices or kernel functions for neural networks or neural/dynamic field models, respectively. We introduce dynamic cognitive modeling as a three-tier top-down approach where cognitive processes are first described as algorithms that operate on complex symbolic data structures. Second, symbolic expressions and operations are represented by states and transformations in abstract vector spaces. Third, prescribed trajectories through representation space are implemented in neurodynamical systems. We discuss the Amari equation for a neural/dynamic field theory as a special case and show that the kernel construction problem is particularly ill-posed. We suggest a Tikhonov-Hebbian learning method as regularization technique and demonstrate its validity and robustness for basic examples of cognitive computations.
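
A minimal sketch of a Tikhonov-regularised Hebbian rule of the kind described: given a prescribed trajectory of states, fit a weight matrix mapping each state to its successor, with the regulariser taming the otherwise ill-posed construction. The discrete weight-matrix form is an assumption here; the paper works with kernels of a neural field equation.

```python
import numpy as np

def tikhonov_hebbian(X, Y, lam):
    """Regularised Hebbian learning: W such that W @ X ~ Y,
    W = Y X^T (X X^T + lam*I)^{-1}."""
    return Y @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(X.shape[0]))

rng = np.random.default_rng(0)
n_units, n_steps = 50, 20
states = rng.standard_normal((n_units, n_steps))   # prescribed trajectory
X, Y = states[:, :-1], states[:, 1:]               # map each state to its successor
W = tikhonov_hebbian(X, Y, lam=1e-2)
print(np.linalg.norm(W @ X - Y))                   # small training residual
```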

Relevance: 90.00%

Abstract:

In this paper a support vector machine (SVM) approach for characterizing the feasible parameter set (FPS) in non-linear set-membership estimation problems is presented. It iteratively solves a regression problem from which an approximation of the boundary of the FPS can be determined. To guarantee convergence to the boundary, the procedure includes a no-derivative line search, and for an appropriate coverage of points on the FPS boundary it is suggested to start with a sequential box pavement procedure. The SVM approach is illustrated on a simple sine and exponential model with two parameters and on an agro-forestry simulation model.
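
A schematic of the underlying idea, assuming a classification surrogate: sampled parameter vectors are labelled feasible when the model output stays within the error bounds at every sample time, and an SVM then approximates the FPS boundary. The two-parameter sine-plus-exponential model is in the spirit of the paper's first example, but all numbers and the classification setup are illustrative, not the paper's iterative regression procedure.

```python
import numpy as np
from sklearn.svm import SVC

# Toy 2-parameter model y(t) = a*sin(t) + exp(b*t); a parameter vector is
# feasible if its output stays within +/- eps of the data at all sample times.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20)
y_data = 1.0 * np.sin(t) + np.exp(0.5 * t)         # data from (a, b) = (1, 0.5)
eps = 0.1

theta = rng.uniform([0.5, 0.0], [1.5, 1.0], size=(2000, 2))   # candidate (a, b)
pred = theta[:, [0]] * np.sin(t) + np.exp(theta[:, [1]] * t)
feasible = np.all(np.abs(pred - y_data) <= eps, axis=1)

clf = SVC(kernel="rbf", gamma=50.0).fit(theta, feasible)      # boundary surrogate
# The level set clf.decision_function(theta) = 0 approximates the FPS boundary.
```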