987 results for ill-posed inverse problem


Relevance: 100.00%

Publisher:

Abstract:

The authors would like to express their gratitude to organizations and people that supported this research. Piotr Omenzetter’s work within the Lloyd’s Register Foundation Centre for Safety and Reliability Engineering at the University of Aberdeen is supported by Lloyd’s Register Foundation. The Foundation helps to protect life and property by supporting engineering-related education, public engagement and the application of research. Ben Ryder of Aurecon and Graeme Cummings of HEB Construction assisted in obtaining access to the bridge and information for modelling. Luke Williams and Graham Bougen, undergraduate research students, assisted with testing.

Relevance: 100.00%

Publisher:

Abstract:

This dissertation studies coding strategies for computational imaging that overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others.

This research applies various coding strategies to optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to extract additional bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system.

Optical multidimensional imaging measures three or more dimensions of information in the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degrading temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal, and polarization coding strategies adapt optics, electronic devices, and tailored modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and minimal added noise, while maintaining or even improving temporal resolution. The experimental results show that appropriate coding strategies can increase sensing capacity by several hundredfold.

The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information in a noisy environment. Accomplishing the same task with engineered systems usually requires multiple detectors, advanced computational algorithms, or artificial intelligence. Compressive acoustic sensing incorporates acoustic metamaterials into the compressive sensing framework to emulate sound localization and selective attention. This research investigates and optimizes the sensing capacity and the spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows multiple speakers to be localized in both stationary and dynamic auditory scenes, and mixed conversations from independent sources to be distinguished with a high audio recognition rate.
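
The computational core shared by these compressive systems can be sketched with a generic linear measurement model y = Ax + noise and a sparse-recovery solver; the random coding matrix, sparsity level, and ISTA settings below are illustrative stand-ins, not the dissertation's calibrated forward models.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative compressive measurement: 64 coded measurements of a
# 256-element scene that is sparse (e.g. a few point sources or spectral lines).
n, m, k = 256, 64, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)    # assumed random coding matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)
y = A @ x_true + 0.01 * rng.standard_normal(m)

# ISTA: iterative soft-thresholding for min_x 0.5*||A x - y||^2 + lam*||x||_1
lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

print("true support:", np.nonzero(x_true)[0], "recovered:", np.nonzero(np.abs(x) > 0.1)[0])

Replacing A with an actual coded-aperture or metamaterial response matrix would turn the same solver into the reconstruction step of the imaging and acoustic systems described above.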

Relevance: 100.00%

Publisher:

Abstract:

This work describes a study of the mathematical methodology for solving the forward and inverse problems in Electrical Impedance Tomography. The study was motivated by the need to understand the inverse problem and its usefulness for image formation in Electrical Impedance Tomography. This understanding made it possible to verify, through equations and programs, the identification of the internal structures that constitute a body. To do so, one must first know the electrical potentials acquired on the boundary of the body. These potentials are obtained by applying an electric current and are computed mathematically in the forward problem through the Laplace equation. The Finite Element Method, in conjunction with the equations of electromagnetism, is used to solve the forward problem. The EIDORS software, in turn, uses the concepts of the forward and inverse problems to reconstruct Electrical Impedance Tomography images, making it possible to visualize and compare different methods of solving the inverse problem for the reconstruction of internal structures. The Tikhonov, NOSER, Laplace, hyperparameter, and Total Variation methods were used to obtain an approximate (regularized) solution to the identification problem. In Electrical Impedance Tomography, with pre-established boundary conditions on the electric currents and defined regions, the hyperparameter method yielded the most suitable approximate solution for image reconstruction.
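
The regularization step shared by several of the methods listed above can be illustrated with a linearized sketch: given a sensitivity (Jacobian) matrix J relating conductivity perturbations to boundary voltage changes, zeroth-order Tikhonov regularization replaces the ill-posed least-squares problem with a penalized one. The Jacobian, noise level, and regularization weight below are placeholders rather than EIDORS output.

import numpy as np

rng = np.random.default_rng(1)

# Placeholder linearized EIT problem: J maps n conductivity pixels to m boundary voltages.
m, n = 32, 100
J = rng.standard_normal((m, n)) * np.exp(-0.05 * np.arange(n))  # decaying columns -> ill-conditioned
sigma_true = np.zeros(n)
sigma_true[40:45] = 1.0                                          # a small internal inclusion
v = J @ sigma_true + 1e-3 * rng.standard_normal(m)               # noisy boundary data

# Zeroth-order Tikhonov: minimize ||J s - v||^2 + alpha * ||s||^2
alpha = 1e-2
sigma_rec = np.linalg.solve(J.T @ J + alpha * np.eye(n), J.T @ v)

print("largest reconstructed pixels (expect near 40-44):", np.argsort(sigma_rec)[-5:])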

Relevance: 100.00%

Publisher:

Abstract:

The central motif of this work is prediction and optimization in the presence of multiple interacting intelligent agents. We use the phrase 'intelligent agents' to imply, in some sense, a 'bounded rationality', the exact meaning of which varies depending on the setting. Our agents may not be 'rational' in the classical game-theoretic sense, in that they do not always optimize a global objective. Rather, they rely on heuristics, as is natural for human agents or even software agents operating in the real world. Within this broad framework we study the problem of influence maximization in social networks, where the behavior of agents is myopic but complication stems from the structure of the interaction networks. In this setting, we generalize two well-known models and give new algorithms and hardness results for our models. Then we move on to models where the agents reason strategically but are faced with considerable uncertainty. For such games, we give a new solution concept and analyze a real-world game using our techniques. Finally, the richest model we consider is that of Network Cournot Competition, which deals with strategic resource allocation in hypergraphs, where agents reason strategically and their interaction is specified indirectly via the players' utility functions. For this model, we give the first equilibrium computability results. In all of the above problems, we assume that the payoffs for the agents are known. However, for real-world games, obtaining the payoffs can be quite challenging. To this end, we also study the inverse problem of inferring payoffs given game history. We propose and evaluate a data-analytic framework and show that it is fast and performant.
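
As a point of reference for the influence-maximization setting, here is a minimal sketch of the standard greedy hill-climbing baseline under the independent cascade model; the thesis generalizes such models, and the toy graph, propagation probability, and Monte Carlo budget below are purely illustrative.

import random

def simulate_ic(graph, seeds, p=0.1):
    """One independent-cascade run; returns the number of activated nodes."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and random.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_influence_max(graph, k, p=0.1, runs=200):
    """Greedy seed selection: repeatedly add the node with the largest estimated marginal spread."""
    seeds = []
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    for _ in range(k):
        best, best_gain = None, -1.0
        for u in nodes - set(seeds):
            spread = sum(simulate_ic(graph, seeds + [u], p) for _ in range(runs)) / runs
            if spread > best_gain:
                best, best_gain = u, spread
        seeds.append(best)
    return seeds

# Toy directed graph given as adjacency lists; real interaction networks are far larger.
toy = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
print(greedy_influence_max(toy, k=2))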

Relevance: 100.00%

Publisher:

Abstract:

Despite the wide swath of applications where multiphase fluid contact lines exist, there is still no consensus on an accurate and general simulation methodology. Most prior numerical work has imposed one of the many dynamic contact-angle theories at solid walls. Such approaches are inherently limited by the accuracy of the chosen theory. In fact, when inertial effects are important, the contact angle may be history dependent and, thus, any single mathematical function is inappropriate. Given these limitations, the present work has two primary goals: 1) create a numerical framework that allows the contact angle to evolve naturally with appropriate contact-line physics and 2) develop equations and numerical methods such that contact-line simulations may be performed on coarse computational meshes.

Fluid flows affected by contact lines are dominated by capillary stresses and require accurate curvature calculations. The level set method was chosen to track the fluid interfaces because it allows interface curvature to be calculated easily and accurately. Unfortunately, level set reinitialization suffers from an ill-posed mathematical problem at contact lines: a "blind spot" exists. Standard techniques to handle this deficiency are shown to introduce parasitic velocity currents that artificially deform freely floating (non-prescribed) contact angles. As an alternative, a new relaxation-equation reinitialization is proposed that removes these spurious velocity currents, and the concept is further explored with level-set extension velocities.
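
For context, here is a minimal one-dimensional sketch of the standard relaxation-type reinitialization PDE, dphi/dtau = S(phi0) * (1 - |dphi/dx|), which drives the level set toward a signed-distance function; the contact-line-specific reinitialization proposed in this work is not reproduced here, and the grid, profile, and iteration count are illustrative (upwinding is omitted for brevity).

import numpy as np

# 1D grid with an interface at x = 0.3; start from a distorted (non-distance) level set.
x = np.linspace(-1.0, 1.0, 201)
dx = x[1] - x[0]
phi = np.tanh(5.0 * (x - 0.3))             # correct sign, wrong slope away from the interface
sign0 = phi / np.sqrt(phi**2 + dx**2)      # smoothed sign of the initial field

dtau = 0.5 * dx
for _ in range(400):
    dphidx = np.gradient(phi, dx)          # simple central differences
    phi = phi + dtau * sign0 * (1.0 - np.abs(dphidx))

# After relaxation |grad phi| ~ 1, i.e. phi approximates the signed distance to x = 0.3.
print(np.abs(np.gradient(phi, dx))[50:150].mean())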

To capture contact-line physics, two classical boundary conditions, the Navier-slip velocity boundary condition and a fixed contact angle, are implemented in direct numerical simulations (DNS). The DNS are found to converge only if the slip length is well resolved by the computational mesh. Unfortunately, since the slip length is often very small compared to the fluid structures of interest, these simulations are not computationally feasible for large systems. To address the second goal, a new methodology is proposed that relies on the volumetric-filtered Navier-Stokes equations. Two unclosed terms, an average curvature and a viscous shear term, are proposed to represent the missing microscale physics on a coarse mesh.

All of these components are then combined into a single framework and tested for a water droplet impacting a partially wetting substrate. Very good agreement is found between the experimental measurements and the numerical simulation for the evolution of the contact diameter in time. Such a comparison would not be possible with prior methods, since the Reynolds and capillary numbers are large. Furthermore, the experimentally approximated slip-length ratio is well outside the range currently achievable by DNS. This framework is a promising first step towards simulating complex physics in capillary-dominated flows at a reasonable computational expense.

Relevance: 100.00%

Publisher:

Abstract:

Photothermal imaging makes it possible to inspect the structure of composite materials by means of nondestructive tests. The surface of a medium is heated at a number of locations and the resulting temperature field is recorded on the same surface. Thermal waves are strongly damped, so robust schemes are needed to reconstruct the structure of the medium from the decaying, time-dependent temperature field. The inverse problem is formulated as a weighted optimization problem with a time-dependent constraint. The inclusions buried in the medium and their material constants are the design variables. We propose an approximation scheme in two steps. First, Laplace transforms are used to generate an approximate optimization problem with a small number of stationary constraints. Then, we implement a descent strategy that alternates topological derivative techniques, to reconstruct the geometry of the inclusions, with gradient methods, to identify their material parameters. Numerical simulations assess the effectiveness of the technique.
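
A brief sketch of the first approximation step under simplifying assumptions: the recorded time-dependent temperature at one detector is Laplace-transformed at a few transform values s, so the time-dependent constraint is replaced by a small number of stationary ones. The synthetic decay model, noise level, and chosen s values below are illustrative.

import numpy as np

def laplace_transform(f, t, s):
    """Trapezoid-rule Laplace transform of a sampled signal, truncated to the record length."""
    w = f * np.exp(-s * t)
    return np.sum((w[:-1] + w[1:]) * np.diff(t)) / 2.0

# Synthetic surface temperature record T(t) at one detector (illustrative decay plus noise).
t = np.linspace(0.0, 10.0, 1000)
T = 1.5 * np.exp(-0.8 * t) + 0.01 * np.random.default_rng(2).standard_normal(t.size)

# Evaluate the transform at a few stationary values of s.
s_values = np.array([0.5, 1.0, 2.0])
T_hat = np.array([laplace_transform(T, t, s) for s in s_values])

# For the noiseless model the exact transform is 1.5 / (s + 0.8); compare.
print(T_hat, 1.5 / (s_values + 0.8))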

Relevance: 100.00%

Publisher:

Abstract:

Injectivity decline, which can be caused by particle retention, generally occurs during water injection or reinjection in oil fields. Several mechanisms, including straining, are responsible for particle retention and pore blocking, causing formation damage and injectivity decline. Predicting formation damage and injectivity decline is essential in waterflooding projects. The Classical Model (CM), which incorporates filtration coefficients and formation damage functions, has been widely used to predict injectivity decline. However, various authors have reported significant discrepancies between the Classical Model and experimental results, motivating the development of deep bed filtration models that consider multiple particle retention mechanisms (Santos & Barros, 2010; SBM). In this dissertation, the solution of the inverse problem was studied and software for the treatment of experimental data was developed. Finally, experimental data were fitted using both the CM and the SBM. The results showed that, depending on the formation damage function, the predictions of injectivity decline using the CM and SBM models can be significantly different.
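
For reference, the classical deep bed filtration system underlying the CM is commonly written in the one-dimensional form below (notation may differ slightly from the dissertation), with the filtration coefficient λ(σ) governing capture kinetics and formation damage entering through the permeability k(σ):

\[
\phi \frac{\partial c}{\partial t} + U \frac{\partial c}{\partial x} = -\frac{\partial \sigma}{\partial t},
\qquad
\frac{\partial \sigma}{\partial t} = \lambda(\sigma)\, U c,
\qquad
U = -\frac{k(\sigma)}{\mu} \frac{\partial p}{\partial x},
\]

where c is the suspended particle concentration, σ the deposited (retained) concentration, φ the porosity, U the Darcy velocity, and μ the fluid viscosity.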

Relevance: 100.00%

Publisher:

Abstract:

Three-dimensional (3D) printers for continuous fiber reinforced composites, such as the MarkTwo (MT) by Markforged, can be used to manufacture flexible elements and compliant mechanisms (CMs). To date, research works devoted to the study and application of flexible elements and CMs realized with the MT printer are few and very recent. A good numerical and/or analytical tool for analyzing the mechanical behavior of these new composites is still missing. In addition, there is still a gap in obtaining the material properties (e.g. elastic modulus), which are usually unknown and sensitive to the printing parameters used (e.g. infill density), making numerical simulation inaccurate. Consequently, the aim of this thesis is to present several developments. The first is a preliminary investigation of the tensile and flexural response of Straight Beam Flexures (SBF) realized with the MT printer and featuring different interlayer fiber volume fractions and orientations, as well as different laminate positions within the sample. The second is a numerical analysis within the Carrera's Unified Formulation (CUF) framework, based on a component-wise (CW) approach, including a novel preprocessing tool developed to account for all printed regions in an easy and time-efficient way. Among its benefits, the CUF-CW approach enables building an accurate database of first natural frequencies and mode shapes, from which the Young's modulus is predicted by solving an inverse problem. To validate the tool, the numerical results are compared to experimental natural frequencies evaluated using a digital image correlation method. Further, the CUF-CW model is combined with static condensation to analyze smart structures that can be decomposed into a large number of similar components. Third, the potential of the MT printer in combination with topology optimization and compliant joints design (CJD) is investigated for the realization of automated machinery mechanisms subjected to inertial loads.
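
A simplified illustration of the inverse idea, not the CUF-CW database approach: assuming an analytical Euler-Bernoulli model for a cantilever specimen, the Young's modulus that reproduces a measured first natural frequency can be recovered by a one-dimensional root find. The geometry, density, and measured frequency below are made-up numbers.

import numpy as np
from scipy.optimize import brentq

# Assumed cantilever beam specimen (illustrative geometry and density).
L, b, h, rho = 0.12, 0.012, 0.003, 1400.0      # m, m, m, kg/m^3
I = b * h**3 / 12.0                             # second moment of area
A = b * h                                       # cross-sectional area

def first_natural_frequency(E):
    """Euler-Bernoulli cantilever, first bending mode, in Hz."""
    return (1.875**2 / (2.0 * np.pi)) * np.sqrt(E * I / (rho * A * L**4))

f_measured = 95.0                               # Hz, e.g. from digital image correlation (made up)
E_est = brentq(lambda E: first_natural_frequency(E) - f_measured, 1e8, 1e12)
print(f"Estimated Young's modulus: {E_est / 1e9:.2f} GPa")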

Relevance: 50.00%

Publisher:

Abstract:

This paper deals with the use of the conjugate gradient method of function estimation for the simultaneous identification of two unknown boundary heat fluxes in parallel plate channels. The fluid flow is assumed to be laminar and hydrodynamically developed. Temperature measurements taken inside the channel are used in the inverse analysis. The accuracy of the present solution approach is examined by using simulated measurements containing random errors, for strict test cases involving functional forms with discontinuities and sharp corners for the unknown functions. Three different types of inverse problems are addressed in the paper, involving the estimation of: (i) spatially dependent heat fluxes; (ii) time-dependent heat fluxes; and (iii) time- and spatially dependent heat fluxes.
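
A compact sketch of a conjugate-gradient-type solution of a linearized inverse heat transfer problem (CGLS on a generic sensitivity matrix, with early stopping acting as regularization); the paper's actual adjoint-based conjugate gradient method of function estimation is not reproduced here, and the matrix and data are placeholders.

import numpy as np

def cgls(A, b, iters=30):
    """Conjugate gradient applied to the normal equations A^T A x = A^T b (CGLS)."""
    x = np.zeros(A.shape[1])
    r = b - A @ x
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# Placeholder sensitivity matrix relating an unknown heat-flux history to temperature readings.
rng = np.random.default_rng(3)
A = np.tril(np.ones((80, 40))) * 0.05           # lower-triangular: causal response (illustrative)
q_true = np.sin(np.linspace(0, np.pi, 40))      # smooth flux with a single peak
T = A @ q_true + 1e-3 * rng.standard_normal(80)
q_est = cgls(A, T, iters=15)                    # early stopping regularizes the solution
print(np.max(np.abs(q_est - q_true)))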

Relevance: 40.00%

Publisher:

Abstract:

Second-rank tensor interactions, such as quadrupolar interactions between spin-1 deuterium nuclei and the electric field gradients created by chemical bonds, are affected by rapid random molecular motions that modulate the orientation of the molecule with respect to the external magnetic field. In biological and model membrane systems, where a distribution of dynamically averaged anisotropies (quadrupolar splittings, chemical shift anisotropies, etc.) is present and where, in addition, various parts of the sample may undergo a partial magnetic alignment, the numerical analysis of the resulting Nuclear Magnetic Resonance (NMR) spectra is a mathematically ill-posed problem. However, numerical methods (de-Pakeing, Tikhonov regularization) exist that allow for a simultaneous determination of both the anisotropy and orientational distributions. An additional complication arises when relaxation is taken into account. This work presents a method of obtaining the orientation dependence of the relaxation rates that can be used for the analysis of molecular motions on a broad range of time scales. An arbitrary set of exponential decay rates is described by a three-term truncated Legendre polynomial expansion in the orientation dependence, as appropriate for a second-rank tensor interaction, and a linear approximation to the individual decay rates is made. Thus a severe numerical instability caused by the presence of noise in the experimental data is avoided. At the same time, enough flexibility in the inversion algorithm is retained to achieve a meaningful mapping from raw experimental data to a set of intermediate, model-free parameters.
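
A small sketch of the truncated Legendre representation of the orientation dependence: relaxation rates sampled at several orientations are fit with the even Legendre polynomials P0, P2, P4 of cos(theta) by linear least squares. The sample rates and noise level below are invented for illustration.

import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(4)

# Illustrative orientation-dependent relaxation rates R(theta) with added noise.
theta = np.linspace(0.0, np.pi / 2, 25)
c = np.cos(theta)
a_true = (2.0, 0.8, -0.3)                       # coefficients of P0, P2, P4
R = sum(a * legendre.Legendre.basis(2 * k)(c) for k, a in enumerate(a_true))
R = R + 0.05 * rng.standard_normal(theta.size)

# Design matrix with only the even-order polynomials, as for a second-rank tensor interaction.
G = np.column_stack([legendre.Legendre.basis(2 * k)(c) for k in range(3)])
a_fit, *_ = np.linalg.lstsq(G, R, rcond=None)
print(a_fit)                                    # should be close to a_true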

Relevance: 40.00%

Publisher:

Abstract:

Electromagnetic tomography has been applied to problems in nondestructive evaluation, ground-penetrating radar, synthetic aperture radar, target identification, electrical well logging, medical imaging, etc. The problem of electromagnetic tomography involves the estimation of the cross-sectional distribution of dielectric permittivity, conductivity, etc., based on measurements of the scattered fields. The inverse scattering problem of electromagnetic imaging is highly nonlinear and ill-posed, and its solution is liable to get trapped in local minima. The iterative solution techniques employed for computing the inverse scattering problem of electromagnetic imaging are highly computation intensive. Thus the solution of the electromagnetic imaging problem is beset with convergence and computational issues. This thesis attempts to develop methods for improving the convergence and reducing the total computation for tomographic imaging of two-dimensional dielectric cylinders illuminated by TM-polarized waves, where the scattering problem is defined using scalar equations. A multi-resolution frequency-hopping approach was proposed, as opposed to the conventional frequency-hopping approach employed to image large inhomogeneous scatterers. The strategy was tested on both synthetic and experimental data and gave results that were better localized and also accelerated the iterative procedure employed for the imaging. A Degree of Symmetry formulation was introduced to locate the scatterer in the investigation domain when the scatterer cross section was circular. The investigation domain could thus be reduced, which reduced the degrees of freedom of the inverse scattering process. Thus the entire measured scattered data were available for the optimization of a smaller number of pixels. This resulted in better and more robust reconstructions of the scatterer cross-sectional profile. The Degree of Symmetry formulation could also be applied to the practical problem of limited-angle tomography, as in the case of a buried pipeline, where the ill-posedness is much larger. The formulation was also tested using experimental data generated from an experimental setup designed for this purpose. The experimental results confirmed the practical applicability of the formulation.

Relevance: 40.00%

Publisher:

Abstract:

We consider the problem of scattering of time-harmonic acoustic waves by an unbounded sound-soft rough surface. Recently, a Brakhage-Werner-type integral equation formulation of this problem has been proposed, based on an ansatz as a combined single- and double-layer potential, but replacing the usual fundamental solution of the Helmholtz equation with an appropriate half-space Green's function. Moreover, it has been shown in the three-dimensional case that this integral equation is uniquely solvable in the space L²(Γ) when the scattering surface Γ does not differ too much from a plane. In this paper, we show that this integral equation is uniquely solvable with no restriction on the surface elevation or slope. Moreover, we construct explicit bounds on the inverse of the associated boundary integral operator, as a function of the wave number, the parameter coupling the single- and double-layer potentials, and the maximum surface slope. These bounds show that the norm of the inverse operator is bounded uniformly in the wave number κ, for κ > 0, if the coupling parameter η is chosen proportional to the wave number. In the case when Γ is a plane, we show that the choice η = κ/2 is nearly optimal in terms of minimizing the condition number.
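
For orientation, the combined single- and double-layer ansatz referred to above has the generic Brakhage-Werner form, with the half-space Green's function G_h in place of the free-space fundamental solution (signs and normalization here are schematic and follow common conventions rather than the paper):

\[
u^{s}(x) = \int_{\Gamma} \left( \frac{\partial G_h(x,y)}{\partial n(y)} - i\,\eta\, G_h(x,y) \right) \varphi(y)\, \mathrm{d}s(y), \qquad x \in D,
\]

with coupling parameter η > 0 and an unknown density φ ∈ L²(Γ).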

Relevance: 40.00%

Publisher:

Abstract:

In this article, we use the no-response test idea, introduced in Luke and Potthast (2003) and Potthast (Preprint) for the inverse obstacle problem, to identify the interface of discontinuity of the coefficient γ of the operator ∇ · γ(x)∇ + c(x) with piecewise regular γ and bounded function c(x). We use infinitely many Cauchy data as measurements and give a reconstructive method to localize the interface. We base this multiwave version of the no-response test on two different proofs. The first contains a pointwise estimate as used by the singular sources method. The second is built on an energy (or integral) estimate which is the basis of the probe method. As a conclusion, the probe and singular sources methods are equivalent regarding their convergence, and the no-response test can be seen as a unified framework for these methods. As a further contribution, we provide a formula to reconstruct the values of the jump of γ(x), x ∈ ∂D, at the boundary. A second consequence of this formula is that the blow-up rate of the indicator functions of the probe and singular sources methods at the interface is given by the order of the singularity of the fundamental solution.

Relevance: 40.00%

Publisher:

Abstract:

Inverse problems for dynamical system models of cognitive processes comprise the determination of synaptic weight matrices or kernel functions for neural networks or neural/dynamic field models, respectively. We introduce dynamic cognitive modeling as a three-tier top-down approach where cognitive processes are first described as algorithms that operate on complex symbolic data structures. Second, symbolic expressions and operations are represented by states and transformations in abstract vector spaces. Third, prescribed trajectories through representation space are implemented in neurodynamical systems. We discuss the Amari equation for a neural/dynamic field theory as a special case and show that the kernel construction problem is particularly ill-posed. We suggest a Tikhonov-Hebbian learning method as a regularization technique and demonstrate its validity and robustness for basic examples of cognitive computations.
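
For reference, the Amari neural/dynamic field equation discussed above is commonly written as (notation may differ from the paper)

\[
\tau\, \frac{\partial u(x,t)}{\partial t} = -u(x,t) + \int_{\Omega} w(x,y)\, f\bigl(u(y,t)\bigr)\, \mathrm{d}y + h(x,t),
\]

where u is the field activation, f the activation (gain) function, and h an external input; the ill-posed kernel construction problem is the determination of w(x, y) from prescribed trajectories u, which the Tikhonov-Hebbian method regularizes.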

Relevance: 40.00%

Publisher:

Abstract:

In this paper we explore classification techniques for ill-posed problems. Two classes are linearly separable in some Hilbert space X if they can be separated by a hyperplane. We investigate stable separability, i.e. the case where there is a positive distance between two separating hyperplanes. When the data in the space Y are generated by a compact operator A applied to the system states x ∈ X, we show that in general we do not obtain stable separability in Y even if the problem in X is stably separable. In particular, we show this for the case where a nonlinear classification is generated from a non-convergent family of linear classes in X. We apply our results to the problem of quality control of fuel cells, where we classify fuel cells according to their efficiency. We can potentially classify a fuel cell using either an externally measured magnetic field or some internal current. However, we cannot measure the current directly, since we cannot access the fuel cell in operation. The first possibility is to apply discrimination techniques directly to the measured magnetic fields. The second approach first reconstructs currents and then carries out the classification on the current distributions. We show that both approaches need regularization and that the regularized classifications are not equivalent in general. Finally, we investigate a widely used linear classification algorithm, Fisher's linear discriminant, with respect to its ill-posedness when applied to data generated via a compact integral operator. We show that the method does not remain stable when the number of measurement points becomes large.
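
As a small numerical illustration of the regularization point, here is a Tikhonov-regularized Fisher linear discriminant in which the pooled within-class scatter matrix is shifted by alpha*I before inversion; the data, dimensions, and alpha below are made up and only mimic a high-dimensional, nearly collinear measurement setting.

import numpy as np

rng = np.random.default_rng(5)

# Toy two-class data in a high-dimensional, nearly collinear feature space,
# loosely mimicking magnetic-field measurements at many closely spaced points.
n_per_class, d = 20, 200
base = rng.standard_normal((2 * n_per_class, 5)) @ rng.standard_normal((5, d))  # rank-deficient
X0 = base[:n_per_class]
X1 = base[n_per_class:] + 0.3                     # class 1 shifted

mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = (np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)) * (n_per_class - 1)

# The plain Fisher direction w = Sw^{-1}(mu1 - mu0) is unstable because Sw is singular;
# a Tikhonov shift alpha * I stabilizes the inversion.
alpha = 0.1
w = np.linalg.solve(Sw + alpha * np.eye(d), mu1 - mu0)

scores0, scores1 = X0 @ w, X1 @ w
print(scores0.mean(), scores1.mean())             # separation of the class score means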