957 results for Nuclear engineering inverse problems
Abstract:
We prove a uniqueness result related to the Germain–Lagrange dynamic plate differential equation. We consider the equation ∂²u/∂t² + Δ²u = g ⊗ f in ]0,+∞) × ℝ², with u(0) = 0 and ∂u/∂t(0) = 0, where u stands for the transverse displacement, f is a distribution compactly supported in space, and g ∈ L¹_loc([0,+∞)) is a function of time such that g(0) ≠ 0 and there is a T₀ > 0 such that g ∈ C¹[0,T₀[. We prove that the knowledge of u over an arbitrary open set of the plate for any interval of time ]0,T[, 0 < T, determines f uniquely.
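Written out in display form, the initial value problem considered above reads:

\[
\begin{cases}
\dfrac{\partial^2 u}{\partial t^2} + \Delta^2 u = g \otimes f & \text{in } \left]0,+\infty\right) \times \mathbb{R}^2,\\[4pt]
u(0) = 0, \qquad \dfrac{\partial u}{\partial t}(0) = 0,
\end{cases}
\]

with $u$ the transverse displacement, $f$ a distribution compactly supported in space, and $g \in L^1_{\mathrm{loc}}([0,+\infty))$ such that $g(0) \neq 0$ and $g \in C^1[0,T_0[$ for some $T_0 > 0$.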
Abstract:
In this thesis, the factorization method for detecting conductivity inhomogeneities in electrical impedance tomography is studied on unbounded domains, specifically the half-plane and the half-space. As solution spaces for the direct problem, i.e., the determination of the electric potential for a prescribed conductivity and a prescribed boundary current, we introduce weighted Sobolev spaces. In these spaces we show the existence of weak solutions of the direct problem and prove an integral representation for the solution of the Laplace equation obtained in the case of homogeneous conductivity. Using the factorization method, we give an explicit characterization of inclusions whose conductivity differs from the background by a jump upwards or downwards. This also shows, for this class of conductivities, that the inclusions can be uniquely reconstructed from knowledge of the local Neumann-to-Dirichlet map. We have implemented the characterization of the inclusions obtained by the factorization method in a numerical algorithm and tested it in both the two- and the three-dimensional case with simulated, partly noisy data. In contrast to other known reconstruction methods, the one presented here requires no a priori information about the number and shape of the inclusions and, being non-iterative, has a comparatively low computational cost.
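In practice, the explicit characterization delivered by the factorization method is usually evaluated through a Picard-type series criterion applied to the measured operator. The following Python sketch illustrates that generic criterion, assuming one is given a symmetric matrix M discretizing the difference between the measured and the background Neumann-to-Dirichlet maps and a set of boundary test functions; all names here are illustrative and this is not the implementation of the thesis.

import numpy as np

def factorization_indicator(M, test_vectors, tol=1e-12):
    """Picard-type indicator commonly used with the factorization method.

    M            : (n, n) real symmetric matrix discretizing the difference between
                   the measured and the background Neumann-to-Dirichlet map.
    test_vectors : (n, m) array whose columns are boundary test functions, one for
                   each sampling point z of the imaging grid.
    Returns an array of length m; large values indicate points inside an inclusion.
    """
    w, V = np.linalg.eigh(M)                  # spectral decomposition of the data operator
    lam = np.abs(w)
    keep = lam > tol * lam.max()              # crude spectral cutoff against noise
    lam, V = lam[keep], V[:, keep]
    coeffs = V.T @ test_vectors               # expansion coefficients of each test function
    # Picard criterion: z lies inside an inclusion iff the series stays bounded.
    series = (np.abs(coeffs) ** 2 / lam[:, None]).sum(axis=0)
    return 1.0 / series                       # close to 0 outside, larger inside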
Abstract:
In this thesis, the factorization method for detecting regions whose material parameters deviate from the background by a jump is studied. Through an abstract formulation we prove the range identity underlying the method for general real elliptic problems and deduce both already known and new applications of the method. For the specific problem of locating magnetic or perfectly electrically conducting objects by means of low-frequency electromagnetic radiation, we show the unique solvability of the direct problem for sufficiently small frequencies and the convergence of its solutions to those of the elliptic equations of magnetostatics. By applying our general result we obtain the unique reconstructability of the sought objects from electromagnetic measurements as well as a numerical algorithm for locating them. Using a model problem, we investigate how inclusions governed by parabolic differential equations can be reconstructed inside a region governed by elliptic differential equations. We prove the unique solvability of the underlying parabolic-elliptic direct problem and, by extending the factorization method, obtain the unique reconstructability of the inclusions as well as a numerical algorithm for the practical implementation of the method.
Abstract:
In this work, we consider a simple model problem for the electromagnetic exploration of small perfectly conducting objects buried within the lower half-space of an unbounded two-layered background medium. In possible applications, such as humanitarian demining, the two layers would correspond to air and soil. A set of electric devices is moved parallel to the surface of the ground to generate a time-harmonic field, and the induced field is measured within the same devices. The goal is to retrieve information about buried scatterers from these data. In mathematical terms, we are concerned with the analysis and numerical solution of the inverse scattering problem of reconstructing the number and the positions of a collection of finitely many small perfectly conducting scatterers buried within the lower half-space of an unbounded two-layered background medium from near-field measurements of time-harmonic electromagnetic waves. For this purpose, we first study the corresponding direct scattering problem in detail and derive an asymptotic expansion of the scattered field as the size of the scatterers tends to zero. Then we use this expansion to justify a non-iterative MUSIC-type reconstruction method for the solution of the inverse scattering problem. We propose a numerical implementation of this reconstruction method and provide a series of numerical experiments.
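MUSIC-type methods of this kind are commonly implemented by projecting test (steering) vectors onto the noise subspace of the measured multistatic response matrix. A minimal Python sketch of that idea follows, under the assumption that the response matrix K and the grid of steering vectors are already available; the names and inputs are illustrative and this is not the paper's implementation.

import numpy as np

def music_indicator(K, steering, n_scatterers):
    """MUSIC-type indicator computed from a multistatic response matrix.

    K            : (n, n) measured multistatic response matrix.
    steering     : (n, m) array whose columns are model (steering) vectors, one for
                   each point z of the sampling grid.
    n_scatterers : assumed dimension of the signal subspace.
    Returns an array of length m; pronounced peaks mark likely scatterer positions.
    """
    U, s, _ = np.linalg.svd(K)
    noise = U[:, n_scatterers:]                         # noise subspace of K
    g = steering / np.linalg.norm(steering, axis=0)     # normalized test vectors
    resid = np.linalg.norm(noise.conj().T @ g, axis=0)  # component in the noise subspace
    return 1.0 / (resid + 1e-15)                        # large where g_z is nearly orthogonal to it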
Abstract:
In this work we study localized electric potentials that have an arbitrarily high energy on some given subset of a domain and low energy on another. We show that such potentials exist for general L∞-conductivities (with positive infima) in almost arbitrarily shaped subregions of a domain, as long as these regions are connected to the boundary and a unique continuation principle is satisfied. From this we deduce a simple, but new, theoretical identifiability result for the famous Calderón problem with partial data. We also show how to construct such potentials numerically and use a connection with the factorization method to derive a new non-iterative algorithm for the detection of inclusions in electrical impedance tomography.
Abstract:
For the detection of hidden objects by low-frequency electromagnetic imaging the Linear Sampling Method works remarkably well despite the fact that the rigorous mathematical justification is still incomplete. In this work, we give an explanation for this good performance by showing that in the low-frequency limit the measurement operator fulfills the assumptions for the fully justified variant of the Linear Sampling Method, the so-called Factorization Method. We also show how the method has to be modified in the physically relevant case of electromagnetic imaging with divergence-free currents. We present numerical results to illustrate our findings, and to show that similar performance can be expected for the case of conducting objects and layered backgrounds.
Abstract:
In electrical impedance tomography, one tries to recover the conductivity inside a physical body from boundary measurements of current and voltage. In many practically important situations, the investigated object has known background conductivity but it is contaminated by inhomogeneities. The factorization method of Andreas Kirsch provides a tool for locating such inclusions. Earlier, it has been shown that under suitable regularity conditions positive (or negative) inhomogeneities can be characterized by the factorization technique if the conductivity or one of its higher normal derivatives jumps on the boundaries of the inclusions. In this work, we use a monotonicity argument to generalize these results: We show that the factorization method provides a characterization of an open inclusion (modulo its boundary) if each point inside the inhomogeneity has an open neighbourhood where the perturbation of the conductivity is strictly positive (or negative) definite. In particular, we do not assume any regularity of the inclusion boundary or set any conditions on the behaviour of the perturbed conductivity at the inclusion boundary. Our theoretical findings are verified by two-dimensional numerical experiments.
Abstract:
Assuming that the heat capacity of a body is negligible outside certain inclusions, the heat equation degenerates to a parabolic-elliptic interface problem. In this work we aim to detect these interfaces from thermal measurements on the surface of the body. We deduce an equivalent variational formulation for the parabolic-elliptic problem and give a new proof of the unique solvability based on Lions’s projection lemma. For the case that the heat conductivity is higher inside the inclusions, we develop an adaptation of the factorization method to this time-dependent problem. In particular this shows that the locations of the interfaces are uniquely determined by boundary measurements. The method also yields a numerical algorithm to recover the inclusions and thus the interfaces. We demonstrate how measurement data can be simulated numerically by a coupling of a finite element method with a boundary element method, and finally we present some numerical results for the inverse problem.
Abstract:
We consider a simple (but fully three-dimensional) mathematical model for the electromagnetic exploration of buried, perfect electrically conducting objects within the soil underground. Moving an electric device parallel to the ground at constant height in order to generate a magnetic field, we measure the induced magnetic field within the device, and factor the underlying mathematics into a product of three operations which correspond to the primary excitation, some kind of reflection on the surface of the buried object(s) and the corresponding secondary excitation, respectively. Using this factorization we are able to give a justification of the so-called sampling method from inverse scattering theory for this particular set-up.
Abstract:
Coarse graining is a popular technique used in physics to speed up the computer simulation of molecular fluids. An essential part of this technique is a method that solves the inverse problem of determining the interaction potential or its parameters from given structural data. Due to discrepancies between model and reality, the potential is not unique, so the stability of such a method and its convergence to a meaningful solution are issues.

In this work, we investigate empirically whether coarse graining can be improved by applying the theory of inverse problems from applied mathematics. In particular, we use singular value analysis to reveal the weak interaction parameters, which have a negligible influence on the structure of the fluid and which cause non-uniqueness of the solution. Further, we apply a regularizing Levenberg-Marquardt method, which is stable against the mentioned discrepancies. We then compare it to the existing physical methods, the Iterative Boltzmann Inversion and the Inverse Monte Carlo method, which are fast and well adapted to the problem but sometimes have convergence problems.

From an analysis of the Iterative Boltzmann Inversion, we derive a meaningful approximation of the structure and use it to obtain a modification of the Levenberg-Marquardt method. We employ the latter to reconstruct the interaction parameters from experimental data for liquid argon and nitrogen. We show that the modified method is stable, convergent and fast. Further, the singular value analysis of the structure and its approximation allows us to determine the crucial interaction parameters, that is, to simplify the modeling of interactions. Our results therefore build a rigorous bridge between the inverse problem from physics and the powerful solution tools from mathematics.
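To make the linear-algebraic machinery concrete, the Python sketch below shows a generic damped (Levenberg-Marquardt style) Gauss-Newton update and where a singular value analysis of the Jacobian would enter. The functions residual and jacobian are placeholders for the structural misfit of the coarse-grained model and its sensitivity; this is not the code used in this work.

import numpy as np

def levenberg_marquardt(residual, jacobian, p0, lam=1e-2, n_iter=50, tol=1e-10):
    """Damped Gauss-Newton (Levenberg-Marquardt) iteration, generic sketch.

    residual(p) : misfit vector, e.g. target structure minus simulated structure.
    jacobian(p) : its Jacobian with respect to the interaction parameters p.
    The damping lam acts as a regularization parameter and keeps the update stable
    when the Jacobian has small singular values (i.e. weak interaction parameters).
    """
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r, J = residual(p), jacobian(p)
        # Damped normal equations: (J^T J + lam * I) dp = -J^T r
        dp = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ r)
        p = p + dp
        if np.linalg.norm(dp) < tol:
            break
    return p

# Singular value analysis: small singular values of jacobian(p) flag the weak
# interaction parameters that barely influence the structure and cause non-uniqueness.
# U, s, Vt = np.linalg.svd(jacobian(p))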
Abstract:
If you had perfect pitch and listened to a recording of the sounds a drum made when struck, could you determine the shape of the drum? This question is an example of an inverse problem; inverse problems arise in medical imaging, oil prospecting, spectroscopy, and many other fields. We’ll first discuss the analogous question in the simpler setting of plucking a string. Then we’ll tackle the problem for drums and see that there are some surprises. Finally, I will give a brief indication of how this problem relates to some of my recent research. The emphasis will be on the ideas rather than on the technical details, so there will be pretty pictures instead of equations.
Abstract:
Suppose that one observes pairs (x₁,Y₁), (x₂,Y₂), ..., (xₙ,Yₙ), where x₁ < x₂ < ... < xₙ are fixed numbers while Y₁, Y₂, ..., Yₙ are independent random variables with unknown distributions. The only assumption is that Median(Yᵢ) = f(xᵢ) for some unknown convex or concave function f. We present a confidence band for this regression function f using suitable multiscale sign tests. While the exact computation of this band seems to require O(n⁴) steps, good approximations can be obtained in O(n²) steps. In addition the confidence band is shown to have desirable asymptotic properties as the sample size n tends to infinity.
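The elementary building block of such multiscale sign tests is an ordinary sign test for the median on a window of observations. Here is a minimal Python sketch of that building block, assuming SciPy is available; the multiscale combination and the convexity constraint that define the actual band are not shown.

import numpy as np
from scipy.stats import binom

def sign_test_pvalue(y, c):
    """Two-sided sign test for H0: Median(Y_i) = c on a window of observations."""
    signs = np.sign(np.asarray(y, dtype=float) - c)
    n = np.count_nonzero(signs)                  # discard exact ties
    k = np.count_nonzero(signs > 0)
    # Compare P(X <= k) and P(X >= k) for X ~ Binomial(n, 1/2).
    p = 2 * min(binom.cdf(k, n, 0.5), binom.sf(k - 1, n, 0.5))
    return min(1.0, p)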
Abstract:
Marshall's (1970) lemma is an analytical result which implies root-n-consistency of the distribution function corresponding to the Grenander (1956) estimator of a non-decreasing probability density. The present paper derives analogous results for the setting of convex densities on [0, ∞).
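Both Marshall's lemma and the Grenander-type estimator revolve around replacing the empirical distribution function by its greatest convex minorant (or least concave majorant). The Python sketch below computes that convex-minorant step for a sample with a known left endpoint of the support; it illustrates only the monotone building block, not the convex-density setting treated in the paper, and the function and argument names are illustrative.

import numpy as np

def convex_minorant_slopes(x, support_left):
    """Greatest convex minorant (GCM) of the empirical distribution function.

    x            : sample (assumed to have distinct values) from a non-decreasing density.
    support_left : known left endpoint of the support, strictly below min(x).
    Returns (knots, slopes); the non-decreasing slopes are the values of a
    Grenander-type estimator on the successive knot intervals.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    xs = np.concatenate(([support_left], x))
    ys = np.arange(n + 1) / n                    # empirical CDF values 0, 1/n, ..., 1
    hull = [0]                                   # indices of the GCM knots
    for i in range(1, n + 1):
        while len(hull) >= 2:
            a, b = hull[-2], hull[-1]
            # Drop b if keeping it would make the slopes decrease (convexity violated).
            if (ys[i] - ys[b]) * (xs[b] - xs[a]) <= (ys[b] - ys[a]) * (xs[i] - xs[b]):
                hull.pop()
            else:
                break
        hull.append(i)
    knots = xs[hull]
    slopes = np.diff(ys[hull]) / np.diff(knots)
    return knots, slopes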
Abstract:
Several strategies relying on kriging have recently been proposed for adaptively estimating contour lines and excursion sets of functions under severely limited evaluation budget. The recently released R package KrigInv 3 is presented and offers a sound implementation of various sampling criteria for those kinds of inverse problems. KrigInv is based on the DiceKriging package, and thus benefits from a number of options concerning the underlying kriging models. Six implemented sampling criteria are detailed in a tutorial and illustrated with graphical examples. Different functionalities of KrigInv are gradually explained. Additionally, two recently proposed criteria for batch-sequential inversion are presented, enabling advanced users to distribute function evaluations in parallel on clusters or clouds of machines. Finally, auxiliary problems are discussed. These include the fine tuning of numerical integration and optimization procedures used within the computation and the optimization of the considered criteria.
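To give a flavour of what such sampling criteria compute, here is a small Python stand-in that uses scikit-learn's Gaussian process regressor instead of DiceKriging (the R API of KrigInv is deliberately not reproduced here): the next evaluation is placed at the candidate point whose classification with respect to the threshold is most uncertain under the current kriging model. The toy objective, grid sizes and threshold are illustrative.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def misclassification_criterion(gp, X_cand, threshold):
    """Pointwise probability of misclassifying a candidate with respect to the
    excursion set {x : f(x) > threshold} under the current Gaussian process model."""
    mean, std = gp.predict(X_cand, return_std=True)
    p_exceed = norm.cdf((mean - threshold) / np.maximum(std, 1e-12))
    return np.minimum(p_exceed, 1.0 - p_exceed)      # largest where the sign is uncertain

# One adaptive step: fit the model on current evaluations, then evaluate next the
# candidate with the highest misclassification probability.
rng = np.random.default_rng(0)
X = rng.uniform(size=(12, 2))
y = np.sin(6 * X[:, 0]) + X[:, 1]                    # toy objective function
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
X_cand = rng.uniform(size=(500, 2))
x_next = X_cand[np.argmax(misclassification_criterion(gp, X_cand, threshold=0.8))]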
Abstract:
A probabilistic safety assessment (PSA) is being developed for a steam-methane reforming hydrogen production plant linked to a high-temperature gas-cooled nuclear reactor (HTGR). This work is based on the Japan Atomic Energy Research Institute's (JAERI) High Temperature Engineering Test Reactor (HTTR) prototype in Japan. The objective of this paper is to show how the PSA can be used for improving the design of the coupled plants. A simplified HAZOP study was performed to identify initiating events, based on existing studies. The results of the PSA show that the average frequency of an accident at this complex that could affect the population is 7 × 10⁻⁸ year⁻¹, which is divided among the various end states. The dominant sequences are those that result in a methane explosion and occur with a frequency of 6.5 × 10⁻⁸ year⁻¹, while the other sequences are much less frequent. The health risk presents itself if there are people in the vicinity who could be affected by the explosion. This analysis also demonstrates that an accident in one of the plants has little effect on the other. This is true given the design-basis distance between the plants, the fact that the reactor is underground, as well as other safety characteristics of the HTGR.
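Taking the quoted figures at face value, the methane-explosion sequences account for nearly all of the estimated accident frequency:

\[
7 \times 10^{-8}\,\mathrm{yr}^{-1} \;-\; 6.5 \times 10^{-8}\,\mathrm{yr}^{-1} \;=\; 5 \times 10^{-9}\,\mathrm{yr}^{-1}
\quad \text{(all remaining end states combined, roughly } 7\% \text{ of the total)}.
\]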