987 results for ill-posed inverse problem
Abstract:
The determination of the displacement and the space-dependent force acting on a vibrating structure from measured final or time-averaged displacement observations is thoroughly investigated. Several aspects related to the existence and uniqueness of a solution of the linear but ill-posed inverse force problems are highlighted. A variational formulation is then proposed to capture the solution, and the gradient of the least-squares functional being minimized is rigorously and explicitly derived. Numerical results obtained using the Landweber method and the conjugate gradient method are presented and discussed, illustrating the convergence of the iterative procedures for exact input data. For noisy data the expected semi-convergence phenomenon appears, and stability is restored by stopping the iterations according to the discrepancy principle, i.e., once the residual becomes comparable to the noise level. The present investigation will be of significance to researchers concerned with wave propagation and the control of vibrating structures.
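As an illustration of the iterative regularization described above, the following minimal NumPy sketch applies the Landweber iteration to a generic discretized linear problem Ax = b and stops it with the discrepancy principle; the operator, data, noise level and step size are hypothetical stand-ins rather than the paper's actual forward model.

```python
import numpy as np

def landweber(A, b, delta, tau=1.1, max_iter=100000):
    """Landweber iteration x_{k+1} = x_k + omega * A^T (b - A x_k),
    stopped by the discrepancy principle ||A x_k - b|| <= tau * delta."""
    omega = 1.0 / np.linalg.norm(A, 2) ** 2      # 0 < omega < 2/||A||^2 ensures convergence
    x = np.zeros(A.shape[1])
    for k in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) <= tau * delta:     # stop once the residual reaches the noise level
            break
        x = x + omega * (A.T @ r)
    return x, k

# Hypothetical mildly ill-posed test problem: numerical integration as the
# forward operator, so recovering x from b = A x + noise is a smoothing-inverse problem.
n = 100
h = 1.0 / n
t = h * (np.arange(n) + 0.5)
A = h * np.tril(np.ones((n, n)))                 # (A x)(t_i) approximates the integral of x up to t_i
x_true = np.sin(np.pi * t)
rng = np.random.default_rng(0)
noise = 1e-3 * rng.standard_normal(n)
b = A @ x_true + noise

x_rec, iters = landweber(A, b, delta=np.linalg.norm(noise))
print(iters, np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```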
Abstract:
Dynamics of biomolecules over various spatial and time scales are essential for biological functions such as molecular recognition, catalysis and signaling. However, reconstruction of biomolecular dynamics from experimental observables requires the determination of a conformational probability distribution. Unfortunately, these distributions cannot be fully constrained by the limited information available from experiments, making the problem ill-posed in the terminology of Hadamard: there is no unique solution, and multiple or even infinitely many solutions may exist. To overcome this ill-posedness, the problem must be regularized by making assumptions, which inevitably introduce biases into the result.
Here, I present two continuous probability density function approaches to solve an important inverse problem called the RDC trigonometric moment problem. By focusing on interdomain orientations, we reduced the problem to the determination of a distribution on the 3D rotational space from residual dipolar couplings (RDCs). We derived an analytical equation that relates the alignment tensors of adjacent domains, which serves as the foundation of the two methods. In the first approach, the ill-posedness is avoided by introducing a continuous distribution model that incorporates a smoothness assumption. To find the optimal distribution, we also designed an efficient branch-and-bound algorithm that exploits the mathematical structure of the analytical solutions. The algorithm is guaranteed to find the distribution that best satisfies the analytical relationship. The method performed well when tested under various levels of experimental noise and when applied to two protein systems. The second approach avoids the use of any model by employing maximum entropy principles. This 'model-free' approach delivers the least biased result consistent with our state of knowledge. In this approach, the solution is an exponential function of Lagrange multipliers. To determine the multipliers, a convex objective function is constructed, so the maximum entropy solution can be found easily by gradient descent methods. Both algorithms can be applied to biomolecular RDC data in general, including data from RNA and DNA molecules.
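The maximum entropy construction described above can be sketched on a toy problem: the distribution is parameterized as an exponential of Lagrange multipliers, and the multipliers are found by gradient descent on the convex dual. The grid, observables and target moments below are hypothetical stand-ins (a 1D angle instead of the full 3D rotational space); only the structure of the calculation follows the abstract.

```python
import numpy as np

# Toy grid standing in for the rotational space; the real problem is over SO(3).
theta = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
# Hypothetical "RDC-like" observables: a few trigonometric moments of theta.
F = np.stack([np.cos(theta), np.sin(theta), np.cos(2 * theta)])   # shape (m, n)
c = np.array([0.3, 0.1, -0.2])                                    # hypothetical measured moments

def dual(lam):
    """Convex dual of the maximum-entropy problem: log Z(lam) - lam . c."""
    logZ = np.log(np.mean(np.exp(lam @ F)))    # uniform reference measure
    return logZ - lam @ c

def dual_grad(lam):
    w = np.exp(lam @ F)
    p = w / w.sum()
    return F @ p - c                           # model moments minus target moments

lam = np.zeros(F.shape[0])
for _ in range(5000):                          # plain gradient descent on the convex dual
    lam -= 0.2 * dual_grad(lam)

p = np.exp(lam @ F)
p /= p.sum()                                   # maximum-entropy distribution exp(lam . F), normalized
print("dual value:", dual(lam))
print("recovered moments:", F @ p)             # should be close to c
```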
Abstract:
Development of reliable methods for optimised energy storage and generation is one of the most imminent challenges in modern power systems. In this paper, an adaptive approach to the load leveling problem is proposed, using novel dynamic models based on Volterra integral equations of the first kind with piecewise continuous kernels. These integral equations efficiently solve such an inverse problem, taking into account both the time-dependent efficiencies and the availability of generation/storage of each energy storage technology. In this analysis, a direct numerical method is employed to find the least-cost dispatch of the available storages. The proposed collocation-type numerical method has second-order accuracy and enjoys self-regularization properties associated with the confidence levels of system demand. This adaptive approach is suitable for energy storage optimisation in real time. The efficiency of the proposed methodology is demonstrated on the Single Electricity Market of the Republic of Ireland and Northern Ireland.
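The first-kind Volterra formulation can be illustrated with a minimal collocation solver: the midpoint rule turns the integral equation into a lower-triangular system that is solved by forward substitution, with the mesh width acting as the regularization parameter. The kernel and right-hand side below are hypothetical test functions, not the load-leveling models of the paper.

```python
import numpy as np

def solve_volterra1(kernel, f, T, n):
    """Midpoint collocation for the first-kind Volterra equation
        int_0^t K(t, s) x(s) ds = f(t),  0 <= t <= T.
    The mesh width h acts as the regularization parameter
    (self-regularization), so n should be tied to the data accuracy."""
    h = T / n
    t = h * np.arange(1, n + 1)          # collocation points t_i
    s = t - h / 2.0                      # midpoints s_{i-1/2}
    x = np.zeros(n)
    for i in range(n):                   # lower-triangular forward substitution
        acc = h * np.sum(kernel(t[i], s[:i]) * x[:i])
        x[i] = (f(t[i]) - acc) / (h * kernel(t[i], s[i]))
    return s, x

# Hypothetical smooth test kernel; the models in the paper use piecewise
# continuous kernels whose jumps encode switching between storage technologies.
K = lambda t, s: np.exp(t - s)
f = lambda t: np.exp(t) - 1.0            # corresponds to the exact solution x(s) = 1
s, x = solve_volterra1(K, f, T=1.0, n=100)
print(np.max(np.abs(x - 1.0)))           # small discretization error
```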
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Accurate estimation of road pavement geometry and layer material properties through the use of proper nondestructive testing and sensor technologies is essential for evaluating a pavement's structural condition and determining options for maintenance and rehabilitation. For these purposes, pavement deflection basins produced by the nondestructive Falling Weight Deflectometer (FWD) test are commonly used. The FWD test drops weights on the pavement to simulate traffic loads and measures the resulting deflection basins. Backcalculation of pavement geometry and layer properties from FWD deflections is a difficult inverse problem, and its solution with conventional mathematical methods is often challenging due to the ill-posed nature of the problem. In this dissertation, a hybrid algorithm was developed to seek robust and fast solutions to this inverse problem. The algorithm is based on soft computing techniques, mainly Artificial Neural Networks (ANNs) and Genetic Algorithms (GAs), together with numerical analysis techniques to properly simulate the geomechanical system. The widely used layered pavement analysis program ILLI-PAVE was employed in the analyses of flexible pavements of various types, including full-depth asphalt and conventional flexible pavements built on either lime-stabilized soils or untreated subgrade. Nonlinear properties of the subgrade soil and the base course aggregate, as transportation geomaterials, were also considered. A computer program, the Soft Computing Based System Identifier (SOFTSYS), was developed. In SOFTSYS, ANNs were used as surrogate models to provide faster solutions of the nonlinear finite element program ILLI-PAVE. The deflections obtained from FWD tests in the field were matched with the predictions obtained from the numerical simulations to develop the SOFTSYS models. The solution of the inverse problem for multi-layered pavements is computationally hard to achieve and is often not feasible due to field variability and the quality of the collected data. The primary difficulty in the analysis arises from the substantial increase in the degree of non-uniqueness of the mapping from the pavement layer parameters to the FWD deflections. The insensitivity of some layer properties lowered the performance of the SOFTSYS models. Still, the SOFTSYS models were shown to work effectively with synthetic data obtained from ILLI-PAVE finite element solutions. In general, SOFTSYS solutions very closely matched the ILLI-PAVE mechanistic pavement analysis results. For SOFTSYS validation, field-collected FWD data were successfully used to predict pavement layer thicknesses and layer moduli of in-service flexible pavements. Some of the very promising SOFTSYS results indicated average absolute errors on the order of 2%, 7%, and 4% for the Hot Mix Asphalt (HMA) thickness estimation of full-depth asphalt pavements, full-depth pavements on lime-stabilized soils, and conventional flexible pavements, respectively. The field validations of SOFTSYS also produced meaningful results: the thickness data obtained from Ground Penetrating Radar testing matched reasonably well with predictions from the SOFTSYS models. The differences observed in the HMA and lime-stabilized soil layer thicknesses were attributed to deflection data variability in the FWD tests.
The backcalculated asphalt concrete layer thickness results matched better in the case of full-depth asphalt flexible pavements built on lime stabilized soils compared to conventional flexible pavements. Overall, SOFTSYS was capable of producing reliable thickness estimates despite the variability of field constructed asphalt layer thicknesses.
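A minimal sketch of the surrogate-plus-search idea (not SOFTSYS itself): a small neural network is trained on synthetic runs of a stand-in forward model, and an evolutionary optimizer then searches for layer parameters whose predicted deflection basin matches a measured one. The forward function, parameter ranges and sensor offsets are invented for illustration; differential evolution is used as a simple stand-in for the genetic algorithm.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from scipy.optimize import differential_evolution

# Hypothetical stand-in for the finite element forward model: maps pavement
# parameters (HMA thickness, HMA modulus, subgrade modulus) to a
# four-sensor deflection basin.  The real surrogate is trained on ILLI-PAVE runs.
def forward(params):
    t_hma, e_hma, e_sub = params
    offsets = np.array([0.0, 0.3, 0.6, 0.9])          # sensor offsets from the load
    return 1000.0 / (e_sub + e_hma * t_hma * (1.0 + offsets))

rng = np.random.default_rng(1)
bounds = [(0.1, 0.5), (2000.0, 20000.0), (20.0, 200.0)]
X = np.column_stack([rng.uniform(lo, hi, 2000) for lo, hi in bounds])
Y = np.array([forward(p) for p in X])

# ANN surrogate of the forward model: a fast replacement for repeated FE analysis.
surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0),
).fit(X, Y)

# Backcalculation: evolutionary search for parameters whose predicted basin
# matches the "measured" one.  Note the non-uniqueness: this toy forward model
# depends on thickness and HMA modulus only through their product.
d_measured = forward(np.array([0.3, 8000.0, 80.0]))

def misfit(p):
    return float(np.sum((surrogate.predict(p.reshape(1, -1))[0] - d_measured) ** 2))

result = differential_evolution(misfit, bounds, seed=0, maxiter=150, tol=1e-8)
print(result.x)          # estimated thickness and moduli (the product is well determined)
```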
Abstract:
Scientific curiosity, exploration of georesources and environmental concerns are pushing the geoscientific research community toward subsurface investigations of ever-increasing complexity. This review explores various approaches to formulate and solve inverse problems in ways that effectively integrate geological concepts with geophysical and hydrogeological data. Modern geostatistical simulation algorithms can produce multiple subsurface realizations that are in agreement with conceptual geological models and statistical rock physics can be used to map these realizations into physical properties that are sensed by the geophysical or hydrogeological data. The inverse problem consists of finding one or an ensemble of such subsurface realizations that are in agreement with the data. The most general inversion frameworks are presently often computationally intractable when applied to large-scale problems and it is necessary to better understand the implications of simplifying (1) the conceptual geological model (e.g., using model compression); (2) the physical forward problem (e.g., using proxy models); and (3) the algorithm used to solve the inverse problem (e.g., Markov chain Monte Carlo or local optimization methods) to reach practical and robust solutions given today's computer resources and knowledge. We also highlight the need to not only use geophysical and hydrogeological data for parameter estimation purposes, but also to use them to falsify or corroborate alternative geological scenarios.
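A minimal sketch of the sampling idea mentioned above: a random-walk Metropolis sampler collecting an ensemble of model realizations consistent with the data. The forward model, prior bounds and noise level are hypothetical stand-ins for the geostatistical and geophysical components discussed in the review.

```python
import numpy as np

# Minimal Metropolis sampler for an ensemble of subsurface models consistent
# with data d = g(m) + noise.  Everything here (forward model g, prior, data)
# is a hypothetical stand-in for the geostatistical/geophysical components.
rng = np.random.default_rng(0)

def g(m):                     # toy forward model: a two-parameter linear response
    return np.array([m[0] + m[1], m[0] + 2.0 * m[1], 2.0 * m[0] + m[1]])

m_true = np.array([1.5, 0.7])
sigma = 0.05
d_obs = g(m_true) + sigma * rng.standard_normal(3)

def log_post(m):              # Gaussian likelihood + flat prior on [0, 3]^2
    if np.any(m < 0.0) or np.any(m > 3.0):
        return -np.inf
    r = g(m) - d_obs
    return -0.5 * np.sum(r ** 2) / sigma ** 2

m = np.array([1.0, 1.0])
lp = log_post(m)
ensemble = []
for k in range(20000):
    prop = m + 0.05 * rng.standard_normal(2)        # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:        # Metropolis acceptance rule
        m, lp = prop, lp_prop
    if k > 5000 and k % 10 == 0:                    # keep a thinned posterior ensemble
        ensemble.append(m.copy())

ensemble = np.array(ensemble)
print(ensemble.mean(axis=0), ensemble.std(axis=0))
```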
Regularization meets GreenAI: a new framework for image reconstruction in life sciences applications
Abstract:
Ill-conditioned inverse problems frequently arise in life sciences, particularly in the context of image deblurring and medical image reconstruction. These problems have been addressed through iterative variational algorithms, which regularize the reconstruction by adding prior knowledge about the problem's solution. Despite the theoretical reliability of these methods, their practical utility is constrained by the time required to converge. Recently, the advent of neural networks allowed the development of reconstruction algorithms that can compute highly accurate solutions with minimal time demands. Regrettably, it is well-known that neural networks are sensitive to unexpected noise, and the quality of their reconstructions quickly deteriorates when the input is slightly perturbed. Modern efforts to address this challenge have led to the creation of massive neural network architectures, but this approach is unsustainable from both ecological and economic standpoints. The recently introduced GreenAI paradigm argues that developing sustainable neural network models is essential for practical applications. In this thesis, we aim to bridge the gap between theory and practice by introducing a novel framework that combines the reliability of model-based iterative algorithms with the speed and accuracy of end-to-end neural networks. Additionally, we demonstrate that our framework yields results comparable to state-of-the-art methods while using relatively small, sustainable models. In the first part of this thesis, we discuss the proposed framework from a theoretical perspective. We provide an extension of classical regularization theory, applicable in scenarios where neural networks are employed to solve inverse problems, and we show there exists a trade-off between accuracy and stability. Furthermore, we demonstrate the effectiveness of our methods in common life science-related scenarios. In the second part of the thesis, we initiate an exploration extending the proposed method into the probabilistic domain. We analyze some properties of deep generative models, revealing their potential applicability in addressing ill-posed inverse problems.
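One possible (and deliberately small) instance of combining a model-based iteration with a learned component is an unrolled scheme that alternates a gradient step on the data-fidelity term with a lightweight convolutional correction; this is an illustrative sketch under assumed operators, not the architecture proposed in the thesis.

```python
import torch
import torch.nn as nn

# Illustrative only: an unrolled scheme that alternates a gradient step on the
# data-fidelity term ||Ax - y||^2 with a small learned correction, keeping the
# network deliberately lightweight in the GreenAI spirit.  A is a hypothetical
# blurring operator, assumed approximately self-adjoint for this sketch.
class SmallCorrection(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )
    def forward(self, x):
        return x + self.net(x)                     # residual correction

class UnrolledDeblur(nn.Module):
    def __init__(self, blur, n_iter=5, step=0.5):
        super().__init__()
        self.blur, self.step = blur, step
        self.corrections = nn.ModuleList([SmallCorrection() for _ in range(n_iter)])
    def forward(self, y):
        x = y.clone()
        for corr in self.corrections:
            grad = self.blur(self.blur(x) - y)     # ~ A^T (A x - y)
            x = corr(x - self.step * grad)         # model-based step + learned correction
        return x

blur = nn.Conv2d(1, 1, 5, padding=2, bias=False)   # stand-in forward operator A
nn.init.constant_(blur.weight, 1.0 / 25.0)          # uniform blur kernel
for p in blur.parameters():
    p.requires_grad_(False)

model = UnrolledDeblur(blur)
y = torch.rand(1, 1, 64, 64)                        # hypothetical blurred, noisy image
x_hat = model(y)                                    # would be trained end-to-end on (y, x) pairs
print(x_hat.shape)
```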
Abstract:
We show that integrability of the BCS model extends beyond Richardson's model (where all Cooper pair scatterings have equal coupling) to that of the Russian doll BCS model for which the couplings have a particular phase dependence that breaks time-reversal symmetry. This model is shown to be integrable using the quantum inverse scattering method, and the exact solution is obtained by means of the algebraic Bethe ansatz. The inverse problem of expressing local operators in terms of the global operators of the monodromy matrix is solved. This result is used to find a determinant formulation of a correlation function for fluctuations in the Cooper pair occupation numbers. These results are used to undertake exact numerical analysis for small systems at half-filling.
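For a small system, the statements above can be cross-checked by brute force: the following sketch diagonalizes the reduced BCS Hamiltonian in the seniority-zero (pair) sector at half filling and evaluates fluctuations of the Cooper pair occupation numbers in the ground state. It uses the equal-coupling (Richardson) limit with hypothetical level spacings and coupling strength, and direct diagonalization rather than the algebraic Bethe ansatz.

```python
import numpy as np
from itertools import combinations

# Exact diagonalization of the reduced BCS (Richardson) Hamiltonian
#   H = sum_j 2 eps_j b_j^dag b_j - g sum_{j,k} b_j^dag b_k
# in the seniority-zero (pair) sector, for a small half-filled system.
# Level spacings and coupling are hypothetical illustration values.
L, M, g = 8, 4, 0.4
eps = np.arange(L, dtype=float)                 # equally spaced single-particle levels

basis = list(combinations(range(L), M))         # occupied-pair configurations
index = {occ: i for i, occ in enumerate(basis)}
dim = len(basis)
H = np.zeros((dim, dim))

for i, occ in enumerate(basis):
    occ_set = set(occ)
    H[i, i] = 2.0 * sum(eps[j] for j in occ) - g * M      # diagonal part (j = k terms)
    for k in occ:                                          # pair scattering k -> j
        for j in range(L):
            if j not in occ_set:
                new = tuple(sorted(occ_set - {k} | {j}))
                H[index[new], i] -= g

w, v = np.linalg.eigh(H)
gs = v[:, 0]

# Pair occupation numbers and their fluctuations in the ground state.
n = np.array([[1.0 if j in occ else 0.0 for j in range(L)] for occ in basis])
mean_n = gs ** 2 @ n
cov = (gs ** 2 * n.T) @ n - np.outer(mean_n, mean_n)
print("ground-state energy:", w[0])
print("<n_j>:", mean_n)
print("fluctuation <n_0 n_1> - <n_0><n_1>:", cov[0, 1])
```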
Abstract:
Superconducting pairing of electrons in nanoscale metallic particles with discrete energy levels and a fixed number of electrons is described by the reduced Bardeen, Cooper, and Schrieffer model Hamiltonian. We show that this model is integrable by the algebraic Bethe ansatz. The eigenstates, spectrum, conserved operators, integrals of motion, and norms of wave functions are obtained. Furthermore, the quantum inverse problem is solved, meaning that form factors and correlation functions can be explicitly evaluated. Closed form expressions are given for the form factors and correlation functions that describe superconducting pairing.
Abstract:
Electrical impedance tomography is a technique to estimate the impedance distribution within a domain, based on measurements on its boundary. In other words, given the mathematical model of the domain, its geometry and boundary conditions, a nonlinear inverse problem of estimating the electric impedance distribution can be solved. Several impedance estimation algorithms have been proposed to solve this problem. In this paper, we present a three-dimensional algorithm, based on the topology optimization method, as an alternative. A sequence of linear programming problems, allowing for constraints, is solved utilizing this method. In each iteration, the finite element method provides the electric potential field within the model of the domain. An electrode model is also proposed (thus, increasing the accuracy of the finite element results). The algorithm is tested using numerically simulated data and also experimental data, and absolute resistivity values are obtained. These results, corresponding to phantoms with two different conductive materials, exhibit relatively well-defined boundaries between them, and show that this is a practical and potentially useful technique to be applied to monitor lung aeration, including the possibility of imaging a pneumothorax.
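One way to picture the "sequence of linear programming problems" is a single linearized update written as an LP: minimize the l1 misfit between the sensitivity matrix times the resistivity update and the measurement residual, subject to box constraints. The sensitivity matrix and residual below are random stand-ins for quantities that would come from the finite element forward solution, and the l1 objective is an illustrative choice, not necessarily the paper's formulation.

```python
import numpy as np
from scipy.optimize import linprog

# One linearized update of the resistivity image, cast as a linear program:
# minimize ||J d_rho - d_v||_1 subject to box constraints on the update.
# J (sensitivity matrix) and d_v (measurement residual) would come from the
# finite element forward solution; here they are random stand-ins.
rng = np.random.default_rng(0)
m, n = 40, 25                      # measurements, resistivity elements
J = rng.standard_normal((m, n))
d_v = rng.standard_normal(m)

# Variables z = [d_rho (n), t (m)] with -t <= J d_rho - d_v <= t.
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[J, -np.eye(m)], [-J, -np.eye(m)]])
b_ub = np.concatenate([d_v, -d_v])
bounds = [(-0.1, 0.1)] * n + [(0.0, None)] * m      # bounded step keeps resistivity physical

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
d_rho = res.x[:n]
print(res.status, np.abs(J @ d_rho - d_v).sum())
```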
Abstract:
This work provides analytical and numerical solutions for the linear, quadratic and exponential Phan–Thien–Tanner (PTT) viscoelastic models, for axial and helical annular fully-developed flows under no-slip and slip boundary conditions, the latter given by the linear and nonlinear Navier slip laws. The rheology of the three PTT model functions is discussed together with the influence of the slip velocity upon the flow velocity and stress fields. For the linear PTT model, full analytical solutions for the inverse problem (unknown velocity) are devised for the linear Navier slip law and two different slip exponents. For the linear PTT model with other values of the slip exponent and for the quadratic PTT model, the polynomial equation for the radial location (β) of the null shear stress must be solved numerically. For both models, the solution of the direct problem is given by an iterative procedure involving three nonlinear equations: one for β, another for the pressure gradient and a third for the torque per unit length. For the exponential PTT model, we devise a numerical procedure that readily computes the numerical solution of the pure axial flow problem.
Abstract:
Dengue is today one of the diseases with the highest incidence in Brazil, occurring with particular frequency in the municipality of Fênix, Paraná. The present study set out to analyse the conceptions that children hold about dengue, identifying the components of the KVP model, as well as to explore the social representations of this group. For this purpose, cartoons were used with 5th-grade students of a primary school located in the municipality of Fênix, and their written responses were analysed to identify categories and the associated components of the KVP model (knowledge, values and practices). Four categories of responses to the interpretation of the dengue cartoon were identified: (i) dengue prevention, (ii) a danger that can lead to death, (iii) a public health problem, and (iv) fighting dengue. "Dengue prevention" was the category in which all three domains K, V and P involved in the construction of the conceptions were identified, whereas the two categories "a danger that can lead to death" and "a public health problem" presented only the K and V domains, and the category "fighting dengue" showed only the V domain. The results of the study showed that the students already see dengue as a problem with serious consequences and that everyone has a share of responsibility in controlling the disease. It is therefore apparent that the work being carried out by the health department, by schools and through advertising campaigns is having an effect, since in 2014 there was a reduction in the number of cases in the municipality studied.
Abstract:
The paper proves the uniqueness of the solution of the inverse problem of potential theory in the case when, on the outer boundary ∂Ω∞ (Ω = Ω1 ∪ Ω2), there exists a point of intersection of the curves ∂Ω1 and ∂Ω2.
Abstract:
The paper proves the uniqueness of the solution of the inverse problem for circular polygons in two cases: first, for a constant density, and second, for a positive density that does not vary with direction.
Abstract:
Heat transfer, inverse problem, spray cooling