987 results for ill-posed inverse problem


Relevance:

100.00%

Publisher:

Abstract:

A neural network procedure to solve inverse chemical kinetic problems is discussed in this work. Rate constants are calculated from the product concentrations of an irreversible consecutive reaction: the hydrogenation of the citral molecule, a process of industrial interest. Both simulated and experimental data are considered. Errors of up to 7% in the simulated concentrations were assumed in order to investigate the robustness of the inverse procedure. The proposed method is also compared with two common methods in nonlinear analysis: the Simplex and Levenberg-Marquardt approaches. In all situations investigated, the neural network approach was numerically stable and robust with respect to deviations in the initial conditions and to experimental noise.
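
The abstract does not give the kinetic model explicitly; as a minimal sketch, assume the hydrogenation is modelled as an irreversible consecutive reaction A -> B -> C with rate constants k1 and k2, fitted to noisy concentration data with the Levenberg-Marquardt baseline mentioned above (not the paper's neural network procedure):

```python
# Minimal sketch (not the paper's neural network): fit the rate constants of an
# irreversible consecutive reaction A -> B -> C from noisy concentration data
# with the Levenberg-Marquardt baseline mentioned in the abstract.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rhs(t, c, k1, k2):
    a, b, _ = c
    return [-k1 * a, k1 * a - k2 * b, k2 * b]

def simulate(k, t_eval, c0=(1.0, 0.0, 0.0)):
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), c0, args=tuple(k),
                    t_eval=t_eval, rtol=1e-8, atol=1e-10)
    return sol.y                      # rows: [A], [B], [C]

# Synthetic "experimental" data with ~7% relative noise, as in the abstract.
k_true = (0.8, 0.3)                   # hypothetical rate constants
t = np.linspace(0.0, 10.0, 25)
rng = np.random.default_rng(0)
c_obs = simulate(k_true, t) * (1.0 + 0.07 * rng.standard_normal((3, t.size)))

def residuals(theta):                 # fit log-rate-constants to keep k > 0
    return (simulate(np.exp(theta), t) - c_obs).ravel()

fit = least_squares(residuals, x0=np.log([0.5, 0.5]), method="lm")
print("estimated k1, k2:", np.exp(fit.x))
```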

Relevance:

100.00%

Publisher:

Abstract:

Methane combustion was studied using the Westbrook and Dryer model. This well-established simplified mechanism is very useful in combustion science, since it notably reduces the computational effort. In the inversion procedure studied here, rate constants are obtained from [CO] concentration data. However, when inherent experimental errors in the chemical concentrations are considered, an ill-conditioned inverse problem must be solved, for which appropriate mathematical algorithms are needed. A recurrent neural network was chosen due to its numerical stability and robustness. The proposed methodology was compared against the Simplex and Levenberg-Marquardt methods, the approaches most commonly used for such optimization problems.
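
The key point is that noise in the concentration data turns the inversion into an ill-conditioned problem. A generic numpy sketch (not tied to the Westbrook and Dryer mechanism) shows how a large condition number of the sensitivity matrix amplifies small data errors into large parameter errors:

```python
# Generic illustration of ill-conditioning: when the sensitivity matrix J of a
# linearized inverse problem has a large condition number, small data errors
# produce large errors in the recovered parameters.
import numpy as np

rng = np.random.default_rng(1)
J = np.array([[1.0, 1.0],
              [1.0, 1.0001]])          # nearly collinear sensitivities
p_true = np.array([2.0, -1.0])
d_clean = J @ p_true

print("condition number of J:", np.linalg.cond(J))

for noise in (0.0, 1e-4, 1e-3):
    d = d_clean + noise * rng.standard_normal(2)
    p_est = np.linalg.lstsq(J, d, rcond=None)[0]
    print(f"noise={noise:.0e}  estimated parameters={p_est}")
```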

Relevance:

100.00%

Publisher:

Abstract:

Image reconstruction using the EIT (Electrical Impedance Tomography) technique is a nonlinear and ill-posed inverse problem which demands a powerful direct or iterative method. A typical approach is to minimize an error functional with an iterative method. In this case, an initial solution close enough to the global minimum is mandatory to ensure convergence to the correct minimum within an appropriate time. The aim of this paper is to present a new, simple and low-cost technique (quadrant-searching) to reduce the search space and consequently to obtain an initial solution for the inverse problem of EIT. The technique calculates the error functional for four different contrast distributions, placing a large prospective inclusion in each of the four quadrants of the domain. By comparing the four values of the error functional, it is possible to draw conclusions about the internal electric contrast. To this end, we first performed tests to assess the accuracy of the BEM (Boundary Element Method) when applied to the direct problem of EIT and to examine the behavior of the error functional surface in the search space. Finally, numerical tests were performed to validate the new technique.
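
A hedged sketch of the quadrant-searching selection logic is given below; forward_model is a purely illustrative placeholder (the paper uses a BEM solver), and only the argmin-over-quadrants step corresponds to the technique described:

```python
# Sketch of the quadrant-searching selection logic with a placeholder forward
# model (the paper uses a BEM solver; forward_model below is purely a toy).
import numpy as np

N = 32
yy, xx = np.mgrid[0:N, 0:N] / (N - 1)           # normalized coordinates in [0, 1]

def inclusion_in_quadrant(q, background=1.0, contrast=2.0):
    """Contrast map with one large prospective inclusion in quadrant q (0..3)."""
    sigma = np.full((N, N), background)
    cx, cy = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)][q]
    sigma[(xx - cx) ** 2 + (yy - cy) ** 2 < 0.04] = contrast
    return sigma

def forward_model(sigma):
    # Toy surrogate for the EIT direct problem: mean contrast seen from each side.
    h = N // 2
    return np.array([sigma[:, :h].mean(), sigma[:, h:].mean(),
                     sigma[:h, :].mean(), sigma[h:, :].mean()])

measured = forward_model(inclusion_in_quadrant(2))     # pretend quadrant 2 is true

errors = [np.sum((forward_model(inclusion_in_quadrant(q)) - measured) ** 2)
          for q in range(4)]
best = int(np.argmin(errors))
print("error functional per quadrant:", errors)
print("start the search with the inclusion in quadrant", best)
```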

Relevance:

100.00%

Publisher:

Abstract:

Optimal state estimation from given observations of a dynamical system by data assimilation is generally an ill-posed inverse problem. In order to solve the problem, a standard Tikhonov, or L2, regularization is used, based on certain statistical assumptions on the errors in the data. The regularization term constrains the estimate of the state to remain close to a prior estimate. In the presence of model error, this approach does not capture the initial state of the system accurately, as the initial state estimate is derived by minimizing the average error between the model predictions and the observations over a time window. Here we examine an alternative L1 regularization technique that has proved valuable in image processing. We show that for examples of flow with sharp fronts and shocks, the L1 regularization technique performs more accurately than standard L2 regularization.
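
As a minimal sketch of the two penalties on a generic linear toy problem (not the paper's assimilation system), the L2 (Tikhonov) solution has a closed form, while the L1-penalized solution is computed here with a basic ISTA loop; on a signal with a sharp, localized feature the L1 estimate typically preserves it better:

```python
# Toy comparison of L2 (Tikhonov) and L1 penalties for a linear problem
# min 0.5*||H x - y||^2 + penalty(x); not the paper's assimilation system.
import numpy as np

rng = np.random.default_rng(2)
n, m = 40, 60
H = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[10:15] = 3.0                                   # sharp, localized feature
y = H @ x_true + 0.1 * rng.standard_normal(m)
lam = 1.0

# L2: closed-form solution of min 0.5||Hx-y||^2 + 0.5*lam*||x||_2^2
x_l2 = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)

# L1: ISTA iterations for min 0.5||Hx-y||^2 + lam*||x||_1
step = 1.0 / np.linalg.norm(H, 2) ** 2                # 1 / Lipschitz constant
x_l1 = np.zeros(n)
for _ in range(500):
    g = x_l1 - step * (H.T @ (H @ x_l1 - y))
    x_l1 = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)   # soft threshold

print("L2 reconstruction error:", np.linalg.norm(x_l2 - x_true))
print("L1 reconstruction error:", np.linalg.norm(x_l1 - x_true))
```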

Relevance:

100.00%

Publisher:

Abstract:

Imaging technologies are widely used in application fields such as the natural sciences, engineering, medicine, and the life sciences. A broad class of imaging problems reduces to solving ill-posed inverse problems (IPs). Traditional strategies for solving these ill-posed IPs rely on variational regularization methods, which are based on the minimization of suitable energies and make use of knowledge about the image formation model (forward operator) and prior knowledge about the solution, but fall short in incorporating knowledge directly from data. The more recent learned approaches, on the other hand, can easily learn the intricate statistics of images from a large data set, but lack a systematic method for incorporating prior knowledge about the image formation model. The main purpose of this thesis is to discuss data-driven image reconstruction methods that combine the benefits of these two reconstruction strategies for the solution of highly nonlinear ill-posed inverse problems. Mathematical formulations and numerical approaches for imaging IPs, including linear as well as strongly nonlinear problems, are described. More specifically, we address the Electrical Impedance Tomography (EIT) reconstruction problem by unrolling the regularized Gauss-Newton method and integrating the regularization learned by a data-adaptive neural network. Furthermore, we investigate the solution of nonlinear ill-posed IPs by introducing a deep-PnP framework that integrates a graph convolutional denoiser into the proximal Gauss-Newton method, with a practical application to EIT, a recently introduced and promising imaging technique. Efficient algorithms are then applied to the solution of the limited-electrode problem in EIT, combining compressive sensing techniques and deep learning strategies. Finally, a transformer-based neural network architecture is adapted to restore the noisy solution of the Computed Tomography problem recovered using the filtered back-projection method.
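
A minimal numpy sketch of one classical regularized Gauss-Newton step is shown below; in the unrolled, learned variant described in the thesis the fixed Tikhonov term would be replaced by a data-adaptive network output, and forward() and jacobian() here are simple placeholders rather than an EIT forward solver:

```python
# One regularized Gauss-Newton step for min ||F(x)-y||^2 + lam*||L(x-x0)||^2.
# In the unrolled, learned variant described in the abstract, the fixed
# Tikhonov term lam*L^T L would be replaced by a data-adaptive (network) term;
# forward() and jacobian() below are placeholders, not an EIT solver.
import numpy as np

def forward(x):                     # placeholder nonlinear forward operator
    return np.array([x[0] ** 2 + x[1], np.sin(x[0]) + x[1] ** 2])

def jacobian(x):                    # its Jacobian, derived analytically here
    return np.array([[2 * x[0], 1.0],
                     [np.cos(x[0]), 2 * x[1]]])

def gn_step(x, y, x0, lam=1e-2, L=None):
    L = np.eye(x.size) if L is None else L
    J = jacobian(x)
    r = y - forward(x)
    A = J.T @ J + lam * L.T @ L
    b = J.T @ r - lam * L.T @ L @ (x - x0)
    return x + np.linalg.solve(A, b)

y_obs = forward(np.array([1.0, 0.5]))          # synthetic data
x = x0 = np.array([0.2, 0.2])                  # prior / initial guess
for _ in range(15):                            # unrolling fixes this iteration count
    x = gn_step(x, y_obs, x0)
print("recovered x:", x)
```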

Relevance:

100.00%

Publisher:

Abstract:

In clinical practice, a classification of seizures based on clinical signs and symptoms leads to an improved understanding of epilepsy-related issues and therefore strongly contributes to better patient care. The inverse problem involves inferring the anatomical brain localization of a seizure from the scalp-surface EEG, a concept we apply here to correlate seizure origin with seizure semiology. The spheres of sensorium, motor features, consciousness changes and autonomic alterations during ictal and postictal manifestations are reviewed, including several subdivisions used to better categorize particular features. Particular attention is given to behavioral features, as well as to features occurring in idiopathic generalized epileptic syndromes and psychogenic nonepileptic spells.

Relevance:

100.00%

Publisher:

Abstract:

The continuous wavelet transform is obtained as a maximum entropy solution of the corresponding inverse problem. It is well known that although a signal can be reconstructed from its wavelet transform, the expansion is not unique due to the redundancy of continuous wavelets. Hence, the inverse problem has no unique solution. If we want to recognize one solution as "optimal", then an appropriate decision criterion has to be adopted. We show here that the continuous wavelet transform is an "optimal" solution in a maximum entropy sense.
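
The redundancy referred to above can be made concrete with a small numerical check (using PyWavelets purely for illustration; the abstract names no software): the CWT of an N-sample signal over S scales produces S x N coefficients, so the transform is overcomplete and its inversion cannot be unique:

```python
# Illustration of CWT redundancy, the source of the non-unique inverse the
# abstract discusses. PyWavelets is used only for convenience here.
import numpy as np
import pywt

n = 256
t = np.linspace(0.0, 1.0, n)
signal = np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * 32 * t)

scales = np.arange(1, 65)
coeffs, freqs = pywt.cwt(signal, scales, "morl")

print("signal samples:   ", signal.size)        # 256
print("CWT coefficients: ", coeffs.size)        # 64 scales x 256 samples
print("redundancy factor:", coeffs.size / signal.size)
```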

Relevance:

100.00%

Publisher:

Abstract:

A maximum entropy statistical treatment of an inverse problem concerning frame theory is presented. The problem arises from the fact that a frame is an overcomplete set of vectors that defines a mapping with no unique inverse. Although any vector in the concomitant space can be expressed as a linear combination of frame elements, the coefficients of the expansion are not unique. Frame theory guarantees the existence of a set of coefficients which is “optimal” in a minimum norm sense. We show here that these coefficients are also “optimal” from a maximum entropy viewpoint.
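
A small numpy check of the minimum-norm property (a generic finite-dimensional example, not taken from the paper): with the frame vectors as the columns of a synthesis matrix F, the Moore-Penrose pseudoinverse yields, among all coefficient vectors reproducing x, the one of smallest norm:

```python
# Minimum-norm frame coefficients via the Moore-Penrose pseudoinverse:
# among all c with F @ c = x, c = pinv(F) @ x has the smallest l2 norm.
# Generic finite-dimensional example, not taken from the paper.
import numpy as np

# A redundant frame of 5 vectors spanning R^3 (columns of F).
rng = np.random.default_rng(3)
F = rng.standard_normal((3, 5))

x = np.array([1.0, -2.0, 0.5])
c_min = np.linalg.pinv(F) @ x              # minimum-norm coefficients

# Any other valid expansion differs by a null-space component and is longer.
null_dir = np.linalg.svd(F)[2][-1]         # a unit vector with F @ null_dir ~ 0
c_other = c_min + 0.7 * null_dir

print("both reconstruct x:", np.allclose(F @ c_min, x), np.allclose(F @ c_other, x))
print("norms (min-norm vs other):", np.linalg.norm(c_min), np.linalg.norm(c_other))
```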

Relevance:

100.00%

Publisher:

Abstract:

The hydrological and biogeochemical processes that operate in catchments influence the ecological quality of freshwater systems through the delivery of fine sediment, nutrients and organic matter. Most models that seek to characterise the delivery of diffuse pollutants from land to water are reductionist. The multitude of processes that are parameterised in such models to ensure generic applicability makes them complex and difficult to test on the available data. Here, we outline an alternative, data-driven, inverse approach. We apply SCIMAP, a parsimonious risk-based model with an explicit treatment of hydrological connectivity, and take a Bayesian approach to the inverse problem of determining the risk that must be assigned to different land uses in a catchment in order to explain the spatial patterns of measured in-stream nutrient concentrations. We apply the model to identify the key sources of nitrogen (N) and phosphorus (P) diffuse pollution risk in eleven UK catchments covering a range of landscapes. The model results show that: 1) some land uses generate a consistently high or low risk of diffuse nutrient pollution; but 2) the risks associated with different land uses vary both between catchments and between nutrients; and 3) the dominant sources of P and N risk in a catchment are often a function of the spatial configuration of land uses. Taken on a case-by-case basis, this type of inverse approach may be used to help prioritise interventions to reduce diffuse pollution risk for freshwater ecosystems.

Relevance:

100.00%

Publisher:

Abstract:

BTES (borehole thermal energy storage) systems exchange thermal energy by conduction with the surrounding ground through borehole materials. The spatial variability of the geological properties and the space-time variability of the hydrogeological conditions affect the real power rate of the heat exchangers and, consequently, the amount of energy extracted from or injected into the ground. For this reason, it is not an easy task to identify the underground thermal properties to be used at the design stage. At the current state of technology, the Thermal Response Test (TRT) is the in situ test that characterizes ground thermal properties with the highest degree of accuracy, but it does not fully solve the problem of characterizing the thermal properties of a shallow geothermal reservoir, simply because it characterizes only the neighborhood of the heat exchanger at hand and only for the duration of the test. Different analytical and numerical models exist for the characterization of shallow geothermal reservoirs, but they are still inadequate and not exhaustive: more sophisticated models must be taken into account, and a geostatistical approach is needed to tackle natural variability and the uncertainty of the estimates. The approach adopted for reservoir characterization is the "inverse problem", typical of oil & gas field analysis. Similarly, we create different realizations of the thermal properties by direct sequential simulation and find the one that best fits the real production data (fluid temperature over time). The software used to simulate heat production is FEFLOW 5.4 (Finite Element subsurface FLOW system). A geostatistical reservoir model has been set up based on thermal property data from the literature and on spatial variability hypotheses, and a real TRT has been tested. We then also analyzed and used two other codes (SA-Geotherm and FV-Geotherm), which are two implementations of the same numerical model as FEFLOW (the Al-Khoury model).
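
The realization-selection step described above reduces to a simple loop, sketched below under strong simplifications: run_forward_model is a placeholder for FEFLOW / SA-Geotherm / FV-Geotherm, and the candidate conductivities are plain random draws rather than direct sequential simulations:

```python
# Sketch of the "pick the best-fitting realization" step described in the
# abstract. run_forward_model is a placeholder for FEFLOW / SA-Geotherm /
# FV-Geotherm, and the realizations are plain random draws, not direct
# sequential simulations.
import numpy as np

rng = np.random.default_rng(4)
t_hours = np.arange(1, 73)                       # a 72 h test, for illustration

def run_forward_model(conductivity, t):
    # Placeholder: crude logarithmic temperature rise, NOT a real simulator.
    return 10.0 + (2.0 / conductivity) * np.log(t + 1.0)

measured_T = run_forward_model(2.3, t_hours) + 0.1 * rng.standard_normal(t_hours.size)

realizations = rng.uniform(1.5, 3.5, size=50)    # candidate ground conductivities
rmse = [np.sqrt(np.mean((run_forward_model(k, t_hours) - measured_T) ** 2))
        for k in realizations]
best = int(np.argmin(rmse))
print(f"best-fitting realization: k = {realizations[best]:.2f} W/mK, RMSE = {rmse[best]:.3f}")
```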

Relevance:

100.00%

Publisher:

Abstract:

The intensity of the long-range correlations observed with the classical HMBC pulse sequence, which uses a single statically optimized long-range coupling delay, is directly related to the size of the coupling constant, and the delay is therefore often set as a compromise. As a result, some long-range correlations may appear with reduced intensity or may even be completely absent from the spectra. After a short introduction, this third manuscript gives a detailed review of selected HMBC variants dedicated to improving the detection of long-range correlations, such as the ACCORD-HMBC, CIGAR-HMBC, and broadband HMBC experiments. Practical details of the accordion optimization, which affords a substantial improvement in both the number and the intensity of the observed long-range correlations but introduces a modulation in F1, are discussed. The incorporation of the so-called constant-time variable delay in the CIGAR-HMBC experiment, which can scale or even completely suppress the 1H–1H coupling modulation inherent to the use of the accordion principle, is also discussed. The broadband HMBC scheme, which consists of recording a series of HMBC spectra with different delays set as a function of the long-range heteronuclear coupling constant ranges and the transverse relaxation times T2, is also examined.
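
A small arithmetic illustration of why a single statically optimized delay is a compromise, assuming the usual relation delay ≈ 1/(2·nJCH); the coupling constants below are generic order-of-magnitude values, not values from the manuscript:

```python
# Why a single statically optimised HMBC delay is a compromise: the optimal
# evolution delay scales as 1/(2*J), and long-range nJ(C,H) values span a
# wide range. The J values below are generic, not taken from the manuscript.
long_range_J_hz = [2.0, 5.0, 8.0, 12.0, 25.0]

for J in long_range_J_hz:
    delay_ms = 1.0 / (2.0 * J) * 1e3
    print(f"nJ(C,H) = {J:5.1f} Hz  ->  optimal delay ~ {delay_ms:6.1f} ms")

# A single delay optimised for a mid-range coupling therefore under-detects
# correlations whose couplings are much smaller or larger.
```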

Relevance:

100.00%

Publisher:

Abstract:

The production of electron–positron pairs in time-dependent electric fields (the Schwinger mechanism) depends non-linearly on the applied field profile. Accordingly, the resulting momentum spectrum is extremely sensitive to small variations of the field parameters. Owing to this non-linear dependence, it has so far not been possible to predict how to choose a field configuration such that a predetermined momentum distribution is generated. We show that quantum kinetic theory, together with optimal control theory, can be used to approximately solve this inverse problem for Schwinger pair production. We exemplify this by studying the superposition of a small number of harmonic components that results in predetermined signatures in the asymptotic momentum spectrum. In the long run, our results could facilitate the observation of this as yet unobserved pair production mechanism in quantum electrodynamics by providing suggestions for tailored field configurations.

Relevance:

100.00%

Publisher:

Abstract:

There is general agreement within the scientific community that Biology is the science with the greatest potential for development in the 21st century. This is due to several reasons, but probably the most important one is the state of development of the other experimental and technological sciences. In this context, there is a very rich variety of mathematical tools, physical techniques and computational resources that make it possible to carry out biological experiments that were unthinkable only a few years ago. Biology is nowadays taking advantage of all these newly developed technologies, which are being applied to the life sciences, opening new research fields and helping to give new insights into many biological problems. Consequently, biologists have greatly improved their knowledge in many key areas, such as human function and human disease. However, there is one human organ that is still barely understood compared with the rest: the human brain. Understanding the human brain is one of the main challenges of the 21st century. In this regard, it is considered a strategic research field by both the European Union and the USA. Thus, there is great interest in applying new experimental techniques to the study of brain function. Magnetoencephalography (MEG) is one of the novel techniques currently applied for mapping brain activity [1]. This technique has important advantages compared with metabolism-based brain imaging techniques such as functional Magnetic Resonance Imaging (fMRI) [2]. The main advantage is that MEG has a higher temporal resolution than fMRI. Another benefit of MEG is that it is a patient-friendly clinical technique: the measurement is performed with a wireless set-up and the patient is not exposed to any radiation. Although MEG is widely applied in clinical studies, there are still open issues regarding data analysis. The present work deals with the solution of the inverse problem in MEG, which is the most controversial and uncertain part of the analysis process [3]. This question is addressed using several variations of a new solving algorithm based on a heuristic method. The performance of these methods is analyzed by applying them to several test cases with known solutions and comparing those solutions with the ones provided by our methods.
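
The abstract does not specify the heuristic algorithm; purely as a point of reference, the sketch below computes a standard Tikhonov-regularized minimum-norm estimate for the linear MEG inverse problem y = L x + noise, with a random matrix standing in for a real lead field:

```python
# Reference only: a standard Tikhonov-regularised minimum-norm estimate for
# the linear MEG inverse problem y = L @ x + noise. This is NOT the heuristic
# algorithm studied in the abstract, and the lead field L here is random.
import numpy as np

rng = np.random.default_rng(5)
n_sensors, n_sources = 64, 500
L = rng.standard_normal((n_sensors, n_sources))    # placeholder lead field

x_true = np.zeros(n_sources)
x_true[[40, 41, 42]] = 5.0                          # a small focal source patch
y = L @ x_true + 0.5 * rng.standard_normal(n_sensors)

lam = 1.0
x_mne = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)

top = np.argsort(np.abs(x_mne))[-5:]
print("largest estimated sources at indices:", sorted(top.tolist()))
```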

Relevance:

100.00%

Publisher:

Abstract:

The seriousness of the current crisis urgently demands new economic thinking that breaks the austerity vs. deficit-spending circle in economic policy. The core tenet of the paper is that the most important problems that the natural and social sciences face today are inverse problems, and that a new approach that goes beyond optimization is necessary. The approach presented here is radical in the sense that it identifies the roots of the problem in key assumptions of economic theory, such as optimal behavior and stability, in order to provide an inverse-thinking perspective on economic modeling, of use in economic and financial stability policy. The inverse problem provides a truly multidisciplinary platform where related problems from different disciplines can be studied under a common approach, with comparable results.