99 results for Inverse Problem
Abstract:
In clinical practice, a classification of seizures based on clinical signs and symptoms leads to an improved understanding of epilepsy-related issues and therefore contributes strongly to better patient care. The inverse problem involves inferring the anatomical brain localization of a seizure from the scalp-surface EEG, a concept we apply here to correlate seizure origin with seizure semiology. The spheres of sensorium, motor features, consciousness changes, and autonomic alterations during ictal and postictal manifestations are reviewed, including several subdivisions used to better categorize particular features. Particular attention is given to behavioral features, as well as to features occurring in idiopathic generalized epileptic syndromes and psychogenic nonepileptic spells.
Abstract:
The hydrological and biogeochemical processes that operate in catchments influence the ecological quality of freshwater systems through the delivery of fine sediment, nutrients and organic matter. Most models that seek to characterise the delivery of diffuse pollutants from land to water are reductionist. The multitude of processes that are parameterised in such models to ensure generic applicability makes them complex and difficult to test on available data. Here, we outline an alternative, data-driven, inverse approach. We apply SCIMAP, a parsimonious risk-based model with an explicit treatment of hydrological connectivity, and take a Bayesian approach to the inverse problem of determining the risk that must be assigned to different land uses in a catchment in order to explain the spatial patterns of measured in-stream nutrient concentrations. We apply the model to identify the key sources of nitrogen (N) and phosphorus (P) diffuse pollution risk in eleven UK catchments covering a range of landscapes. The model results show that: 1) some land uses generate a consistently high or low risk of diffuse nutrient pollution; 2) the risks associated with different land uses vary both between catchments and between nutrients; and 3) the dominant sources of P and N risk in a catchment are often a function of the spatial configuration of land uses. Taken on a case-by-case basis, this type of inverse approach may be used to help prioritise the focus of interventions to reduce diffuse pollution risk for freshwater ecosystems.
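As a rough illustration of the Bayesian inverse approach described above, the sketch below uses a Metropolis sampler to infer per-land-use risk weights from in-stream concentrations. The linear mixing forward model, the data, and every parameter value are hypothetical placeholders for illustration, not SCIMAP's actual formulation:

```python
# Minimal Metropolis sampler for land-use risk weights (illustrative, not SCIMAP).
# Forward-model assumption: predicted in-stream nutrient concentration at each
# monitoring point is a linear mix of upstream land-cover fractions and per-class
# risk weights. All names and data here are hypothetical.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: rows = monitoring points, columns = land-use classes
# (e.g. arable, improved grassland, woodland), entries = upstream area fractions.
A = rng.dirichlet(alpha=[2.0, 2.0, 2.0], size=30)        # 30 points, 3 classes
w_true = np.array([0.9, 0.5, 0.1])                        # "true" risk weights
y = A @ w_true + rng.normal(0.0, 0.05, size=30)           # noisy observations

def log_likelihood(w):
    resid = y - A @ w
    return -0.5 * np.sum(resid**2) / 0.05**2

def metropolis(n_iter=20000, step=0.05):
    w = rng.uniform(0.0, 1.0, size=3)                     # uniform prior on [0, 1]
    ll = log_likelihood(w)
    samples = []
    for _ in range(n_iter):
        w_prop = w + rng.normal(0.0, step, size=3)
        if np.all((0.0 <= w_prop) & (w_prop <= 1.0)):     # stay inside the prior
            ll_prop = log_likelihood(w_prop)
            if np.log(rng.uniform()) < ll_prop - ll:      # Metropolis accept rule
                w, ll = w_prop, ll_prop
        samples.append(w.copy())                          # rejected steps repeat w
    return np.array(samples)

posterior = metropolis()
print("posterior mean risk per land use:", posterior[len(posterior)//2:].mean(axis=0))
```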
Abstract:
Scientific curiosity, exploration of georesources and environmental concerns are pushing the geoscientific research community toward subsurface investigations of ever-increasing complexity. This review explores various approaches to formulate and solve inverse problems in ways that effectively integrate geological concepts with geophysical and hydrogeological data. Modern geostatistical simulation algorithms can produce multiple subsurface realizations that are in agreement with conceptual geological models, and statistical rock physics can be used to map these realizations into the physical properties that are sensed by the geophysical or hydrogeological data. The inverse problem consists of finding one or an ensemble of such subsurface realizations that are in agreement with the data. The most general inversion frameworks are presently often computationally intractable when applied to large-scale problems, and it is necessary to better understand the implications of simplifying (1) the conceptual geological model (e.g., using model compression); (2) the physical forward problem (e.g., using proxy models); and (3) the algorithm used to solve the inverse problem (e.g., Markov chain Monte Carlo or local optimization methods) in order to reach practical and robust solutions given today's computer resources and knowledge. We also highlight the need not only to use geophysical and hydrogeological data for parameter estimation, but also to use them to falsify or corroborate alternative geological scenarios.
Abstract:
Over the past decade, significant interest has been expressed in relating the spatial statistics of surface-based reflection ground-penetrating radar (GPR) data to those of the imaged subsurface volume. A primary motivation for this work is that changes in the radar wave velocity, which largely control the character of the observed data, are expected to be related to corresponding changes in subsurface water content. Although previous work has indeed indicated that the spatial statistics of GPR images are linked to those of the water content distribution of the probed region, a viable method for quantitatively analyzing the GPR data and solving the corresponding inverse problem has not yet been presented. Here we address this issue by first deriving a relationship between the 2-D autocorrelation of a water content distribution and that of the corresponding GPR reflection image. We then show how a Bayesian inversion strategy based on Markov chain Monte Carlo sampling can be used to estimate the posterior distribution of subsurface correlation model parameters that are consistent with the GPR data. Our results indicate that if the underlying assumptions are valid and we possess adequate prior knowledge regarding the water content distribution, in particular its vertical variability, this methodology allows not only for the reliable recovery of lateral correlation model parameters but also for estimates of parameter uncertainties. In the case where prior knowledge regarding the vertical variability of water content is not available, the results show that the methodology still reliably recovers the aspect ratio of the heterogeneity.
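The summary statistic at the heart of this approach is the 2-D autocorrelation of the image. Below is a minimal sketch of how it can be computed via the Wiener-Khinchin relation, applied to a synthetic anisotropic random field standing in for a GPR reflection image; the field, grid size, and correlation-length ratio are assumptions for illustration:

```python
# Sketch: 2-D autocorrelation via the Wiener-Khinchin relation, the summary
# statistic the inversion matches between a GPR reflection image and the
# subsurface water-content field. Data are synthetic placeholders.
import numpy as np

def autocorr2d(img):
    """Normalized 2-D autocorrelation of a zero-mean field, computed by FFT."""
    f = img - img.mean()
    spec = np.abs(np.fft.fft2(f))**2          # power spectrum
    ac = np.fft.ifft2(spec).real              # Wiener-Khinchin: IFFT of the power
    ac = np.fft.fftshift(ac)                  # put zero lag at the center
    return ac / ac.max()

# Synthetic anisotropic random field as a stand-in for a GPR image:
rng = np.random.default_rng(0)
noise = rng.normal(size=(128, 128))
kz, kx = np.meshgrid(np.fft.fftfreq(128), np.fft.fftfreq(128), indexing="ij")
# Anisotropic Gaussian spectral filter; lateral correlation length is assumed
# to be about four times the vertical one.
filt = np.exp(-((kz * 10)**2 + (kx * 40)**2))
field = np.fft.ifft2(np.fft.fft2(noise) * filt).real

ac = autocorr2d(field)
print("zero-lag value (should be 1):", ac[64, 64])
```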
Abstract:
For a wide range of environmental, hydrological, and engineering applications there is a fast-growing need for high-resolution imaging. In this context, waveform tomographic imaging of crosshole georadar data is a powerful method able to provide images of pertinent electrical properties in near-surface environments with unprecedented spatial resolution. In contrast, conventional ray-based tomographic methods, which consider only a very limited part of the recorded signal (first-arrival traveltimes and maximum first-cycle amplitudes), suffer from inherent limitations in resolution and may prove inadequate in complex environments. For a typical crosshole georadar survey, the potential improvement in resolution when using waveform-based approaches instead of ray-based approaches is about one order of magnitude. Moreover, the spatial resolution of waveform-based inversions is comparable to that of common logging methods. While waveform tomographic imaging has become well established in exploration seismology over the past two decades, it remains comparatively underdeveloped in the georadar domain despite corresponding needs. Recently, different groups have presented finite-difference time-domain waveform inversion schemes for crosshole georadar data, which are adaptations and extensions of Tarantola's seminal nonlinear generalized least-squares approach developed for the seismic case. First applications of these new crosshole georadar waveform inversion schemes to synthetic and field data have shown promising results. However, little is known about the limits and performance of such schemes in complex environments. To this end, the general motivation of my thesis is the evaluation of the robustness and limitations of waveform inversion algorithms for crosshole georadar data, in order to apply such schemes to a wide range of real-world problems.

One crucial issue in making any waveform scheme applicable and effective for real-world crosshole georadar problems is the accurate estimation of the source wavelet, which is unknown in reality. Waveform inversion schemes for crosshole georadar data require forward simulations of the wavefield in order to iteratively solve the inverse problem. Therefore, accurate knowledge of the source wavelet is critically important for the successful application of such schemes. Relatively small differences in the estimated source wavelet shape can lead to large differences in the resulting tomograms. In the first part of my thesis, I explore the viability and robustness of a relatively simple iterative deconvolution technique that incorporates the estimation of the source wavelet into the waveform inversion procedure rather than adding additional model parameters to the inversion problem. Extensive tests indicate that this source wavelet estimation technique is simple yet effective, and is able to provide remarkably accurate and robust estimates of the source wavelet in the presence of strong heterogeneity in both the dielectric permittivity and electrical conductivity, as well as significant ambient noise in the recorded data.
Furthermore, our tests also indicate that the approach is insensitive to the phase characteristics of the starting wavelet, which is not the case when directly incorporating the wavelet estimation into the inverse problem.

Another critical issue with crosshole georadar waveform inversion schemes that clearly needs to be investigated is the consequence of the common assumption of frequency-independent electromagnetic constitutive parameters. This is crucial since, in reality, these parameters are known to be frequency-dependent and complex, and thus recorded georadar data may show significant dispersive behaviour. In particular, in the presence of water, there is a wide body of evidence showing that the dielectric permittivity can be significantly frequency-dependent over the GPR frequency range, due to a variety of relaxation processes. The second part of my thesis is therefore dedicated to the evaluation of the reconstruction limits of a non-dispersive crosshole georadar waveform inversion scheme in the presence of varying degrees of dielectric dispersion. I show that the inversion algorithm, combined with the iterative deconvolution-based source wavelet estimation procedure, which is partially able to account for the frequency-dependent effects through an "effective" wavelet, performs remarkably well in weakly to moderately dispersive environments and has the ability to provide adequate tomographic reconstructions.
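As a sketch of the iterative-deconvolution idea for source-wavelet estimation, the following computes a least-squares wavelet that maps synthetic impulse responses onto observed traces via damped spectral division. This follows the general frequency-domain estimation style; the thesis's exact scheme may differ, and all data and the water-level parameter are illustrative:

```python
# Minimal sketch of one iterative-deconvolution wavelet update. All data are
# synthetic placeholders; a full scheme would alternate this update with
# waveform-inversion model updates.
import numpy as np

def update_wavelet(d_obs, g_syn, eps=1e-3):
    """Least-squares wavelet mapping synthetic impulse responses g_syn (traces
    computed with the current model and a delta source) onto observed traces
    d_obs. The water level eps stabilizes the spectral division."""
    D = np.fft.rfft(d_obs, axis=1)            # observed spectra, one row per trace
    G = np.fft.rfft(g_syn, axis=1)            # synthetic impulse-response spectra
    num = np.sum(np.conj(G) * D, axis=0)      # stack over all traces
    den = np.sum(np.abs(G)**2, axis=0)
    W = num / (den + eps * den.max())         # damped deconvolution
    return np.fft.irfft(W, n=d_obs.shape[1])

# Toy check: recover a known Ricker-like pulse from noisy convolved traces.
rng = np.random.default_rng(1)
t = np.arange(256) * 0.1
w_true = (1 - 2 * (t - 5)**2) * np.exp(-(t - 5)**2)       # assumed source pulse
g = rng.normal(size=(20, 256))                             # 20 random "earth" responses
d = np.array([np.fft.irfft(np.fft.rfft(gi) * np.fft.rfft(w_true), n=256) for gi in g])
d += rng.normal(0.0, 0.01, size=d.shape)                   # ambient noise

w_est = update_wavelet(d, g)
print("wavelet recovery error:", np.linalg.norm(w_est - w_true) / np.linalg.norm(w_true))
```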
Abstract:
Numerous sources of evidence point to the fact that heterogeneity within the Earth's deep crystalline crust is complex, and hence it may be best described through stochastic rather than deterministic approaches. As seismic reflection imaging arguably offers the best means of sampling deep crustal rocks in situ, much interest has been expressed in using such data to characterize the stochastic nature of crustal heterogeneity. Previous work on this problem has shown that the spatial statistics of seismic reflection data are indeed related to those of the underlying heterogeneous seismic velocity distribution. Until now, however, the nature of this relationship has remained elusive, because most of this work was either strictly empirical or based on incorrect methodological approaches. Here, we introduce a conceptual model, based on the assumption of weak scattering, that allows us to quantitatively link the second-order statistics of a 2-D seismic velocity distribution with those of the corresponding processed and depth-migrated seismic reflection image. We then perform a sensitivity study in order to investigate what information regarding the stochastic model parameters describing crustal velocity heterogeneity might potentially be recovered from the statistics of a seismic reflection image using this model. Finally, we present a Monte Carlo inversion strategy to estimate these parameters, and we show examples of its application at two different source frequencies and using two different sets of prior information. Our results indicate that the inverse problem is inherently non-unique and that many different combinations of the vertical and lateral correlation lengths describing the velocity heterogeneity can yield seismic images with the same 2-D autocorrelation structure. The ratio of the lateral to vertical correlation length across all of these possible combinations, however, remains roughly constant, which indicates that, without additional prior information, the aspect ratio is the only parameter describing the stochastic seismic velocity structure that can be reliably recovered.
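A toy rejection-sampling experiment makes the non-uniqueness conclusion concrete: if the image statistics effectively constrain only the lateral-to-vertical correlation-length ratio, the individual lengths remain wide open while the ratio is pinned. The scalar "forward model" below is an assumed stand-in for the paper's full autocorrelation computation, with made-up numbers throughout:

```python
# Toy Monte Carlo inversion over the two correlation lengths (a_z, a_x) of the
# velocity heterogeneity, illustrating the aspect-ratio trade-off. Illustrative
# only: the real study compares full 2-D image autocorrelations, not a scalar.
import numpy as np

rng = np.random.default_rng(7)
ratio_obs = 4.0                                  # hypothetical observed aspect ratio
sigma = 0.2                                      # assumed observational noise level

accepted = []
for _ in range(100000):
    a_z = rng.uniform(0.05, 1.0)                 # vertical correlation length (km)
    a_x = rng.uniform(0.05, 5.0)                 # lateral correlation length (km)
    misfit = (a_x / a_z - ratio_obs) / sigma
    if misfit**2 < 1.0:                          # keep models fitting within 1 sigma
        accepted.append((a_z, a_x))

accepted = np.array(accepted)
ratios = accepted[:, 1] / accepted[:, 0]
print(len(accepted), "accepted models")
print("a_z spans [%.2f, %.2f]" % (accepted[:, 0].min(), accepted[:, 0].max()))
print("a_x spans [%.2f, %.2f]" % (accepted[:, 1].min(), accepted[:, 1].max()))
print("aspect ratio pinned near %.2f +/- %.2f" % (ratios.mean(), ratios.std()))
```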
Abstract:
Fetal MRI reconstruction aims at finding a high-resolution image given a small set of low-resolution images. It is usually modeled as an inverse problem where the regularization term plays a central role in the reconstruction quality. The literature has considered several regularization terms, such as Dirichlet/Laplacian energy, Total Variation (TV)-based energies, and more recently non-local means. Although TV energies are quite attractive because of their edge-preserving ability, only standard explicit steepest-gradient techniques have been applied to optimize fetal-based TV energies. The main contribution of this work lies in the introduction of a well-posed TV algorithm from the point of view of convex optimization. Specifically, our proposed TV optimization algorithm for fetal reconstruction is optimal w.r.t. the asymptotic and iterative convergence speeds O(1/n²) and O(1/√ε), while existing techniques are in O(1/n) and O(1/ε). We apply our algorithm to (1) clinical newborn data, considered as ground truth, and (2) clinical fetal acquisitions. Our algorithm compares favorably with the literature in terms of speed and accuracy.
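The O(1/n²) rate quoted above is characteristic of Nesterov-accelerated proximal gradient methods (FISTA, Beck & Teboulle). A minimal sketch of that acceleration scheme follows, with a soft-threshold (l1) prox standing in for the TV prox used in the paper and a toy random sampling operator in place of the fetal MRI acquisition model:

```python
# Minimal FISTA sketch showing where the O(1/n^2) rate comes from. The l1 prox
# and the toy operator A are illustrative stand-ins, not the paper's model.
import numpy as np

rng = np.random.default_rng(3)
n = 128
A = rng.normal(size=(64, n)) / np.sqrt(64)    # toy low-resolution sampling operator
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.normal(size=64)

lam = 0.02
L = np.linalg.norm(A, 2)**2                   # Lipschitz constant of the gradient

def soft(v, t):                               # prox of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = z = np.zeros(n)
t = 1.0
for _ in range(300):
    grad = A.T @ (A @ z - y)                  # gradient of the data-fidelity term
    x_new = soft(z - grad / L, lam / L)       # proximal gradient step
    t_new = (1 + np.sqrt(1 + 4 * t**2)) / 2   # Nesterov momentum schedule
    z = x_new + ((t - 1) / t_new) * (x_new - x)
    x, t = x_new, t_new

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```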
Abstract:
Quantifying the spatial configuration of hydraulic conductivity (K) in heterogeneous geological environments is essential for accurate predictions of contaminant transport, but is difficult because of the inherent limitations in resolution and coverage associated with traditional hydrological measurements. To address this issue, we consider crosshole and surface-based electrical resistivity geophysical measurements, collected in time during a saline tracer experiment. We use a Bayesian Markov-chain-Monte-Carlo (McMC) methodology to jointly invert the dynamic resistivity data, together with borehole tracer concentration data, to generate multiple posterior realizations of K that are consistent with all available information. We do this within a coupled inversion framework, whereby the geophysical and hydrological forward models are linked through an uncertain relationship between electrical resistivity and concentration. To minimize computational expense, a facies-based subsurface parameterization is developed. The Bayesian-McMC methodology allows us to explore the potential benefits of including the geophysical data into the inverse problem by examining their effect on our ability to identify fast flowpaths in the subsurface, and their impact on hydrological prediction uncertainty. Using a complex, geostatistically generated, two-dimensional numerical example representative of a fluvial environment, we demonstrate that flow model calibration is improved and prediction error is decreased when the electrical resistivity data are included. The worth of the geophysical data is found to be greatest for long spatial correlation lengths of subsurface heterogeneity with respect to wellbore separation, where flow and transport are largely controlled by highly connected flowpaths.
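A minimal sketch of the coupled-inversion ingredient described above: a single joint likelihood ties the tracer-concentration and resistivity data together through an uncertain petrophysical link. The Archie-type relation and all parameter values are illustrative assumptions, not the paper's calibrated relationship; within an McMC run, the link parameters would be sampled jointly with the K field:

```python
# Joint likelihood for coupled hydrogeophysical inversion (illustrative).
import numpy as np

def resistivity_from_concentration(c, rho_f0, slope, m=1.8, phi=0.3):
    """Assumed petrophysical link: fluid resistivity falls with salinity, then
    an Archie-type law with cementation exponent m and porosity phi gives the
    bulk value. All parameter values are placeholders."""
    rho_fluid = rho_f0 / (1.0 + slope * c)
    return rho_fluid * phi**(-m)

def joint_log_likelihood(c_pred, c_obs, rho_obs, link_params,
                         sigma_c=0.05, sigma_rho=2.0):
    """Sum of Gaussian log-likelihoods for hydrological and geophysical data,
    coupled through the uncertain link parameters."""
    rho_pred = resistivity_from_concentration(c_pred, *link_params)
    ll_hydro = -0.5 * np.sum(((c_pred - c_obs) / sigma_c)**2)
    ll_geo = -0.5 * np.sum(((rho_pred - rho_obs) / sigma_rho)**2)
    return ll_hydro + ll_geo

# Tiny usage example with made-up numbers:
c = np.linspace(0.0, 1.0, 5)                  # hypothetical concentrations
rho = resistivity_from_concentration(c, 20.0, 3.0)
print(joint_log_likelihood(c, c + 0.02, rho, link_params=(20.0, 3.0)))
```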
Abstract:
Although fetal anatomy can be adequately viewed in new multi-slice MR images, many critical limitations remain for quantitative data analysis. To this end, several research groups have recently developed advanced image processing methods, often denoted as super-resolution (SR) techniques, to reconstruct a high-resolution (HR) motion-free volume from a set of clinical low-resolution (LR) images. The reconstruction is usually modeled as an inverse problem where the regularization term plays a central role in the reconstruction quality. The literature has been drawn to Total Variation energies because of their edge-preserving ability, but only standard explicit steepest-gradient techniques have been applied for optimization. In a preliminary work, it was shown that novel fast convex optimization techniques could be successfully applied to design an efficient Total Variation optimization algorithm for the super-resolution problem. In this work, two major contributions are presented. First, we briefly review the Bayesian and variational dual formulations of current state-of-the-art methods dedicated to fetal MRI reconstruction. Second, we present an extensive quantitative evaluation of our previously introduced SR algorithm on both simulated fetal data and real clinical data (with both normal and pathological subjects). Specifically, we study the robustness of regularization terms in the presence of residual registration errors, and we also present a novel strategy for automatically selecting the weight of the regularization term relative to the data fidelity term. Our results show that our TV implementation is highly robust to motion artifacts and that it offers the best trade-off between speed and accuracy for fetal MRI recovery in comparison with state-of-the-art methods.
Abstract:
Fetal MRI reconstruction aims at finding a high-resolution image given a small set of low-resolution images. It is usually modeled as an inverse problem where the regularization term plays a central role in the reconstruction quality. The literature has considered several regularization terms, such as Dirichlet/Laplacian energy [1], Total Variation (TV)-based energies [2,3], and more recently non-local means [4]. Although TV energies are quite attractive because of their edge-preserving ability, only standard explicit steepest-gradient techniques have been applied to optimize fetal-based TV energies. The main contribution of this work lies in the introduction of a well-posed TV algorithm from the point of view of convex optimization. Specifically, our proposed TV optimization algorithm for fetal reconstruction is optimal w.r.t. the asymptotic and iterative convergence speeds O(1/n²) and O(1/√ε), while existing techniques are in O(1/n) and O(1/ε). We apply our algorithm to (1) clinical newborn data, considered as ground truth, and (2) clinical fetal acquisitions. Our algorithm compares favorably with the literature in terms of speed and accuracy.
Abstract:
The goal of the present work was to assess the feasibility of using a pseudo-inverse and null-space optimization approach in the modeling of shoulder biomechanics. The method was applied to a simplified musculoskeletal shoulder model. The mechanical system consisted of the arm, and the external forces were the arm weight, the forces of 6 scapulo-humeral muscles, and the reaction at the glenohumeral joint, which was considered a spherical joint. Muscle wrapping was considered around the humeral head, which was assumed spherical. The dynamical equations were solved with a Lagrangian approach. The mathematical redundancy of the mechanical system was resolved in two steps: a pseudo-inverse optimization to minimize the square of the muscle stress, and a null-space optimization to restrict the muscle forces to physiological limits. Several movements were simulated. The mathematical and numerical aspects of the constrained redundancy problem were efficiently solved by the proposed method. The predicted muscle moment arms were consistent with cadaveric measurements, and the joint reaction force was consistent with in vivo measurements. This preliminary work demonstrates that the developed algorithm has great potential for more complex musculoskeletal modeling of the shoulder joint. In particular, it could be further applied to a non-spherical joint model, allowing for the natural translation of the humeral head in the glenoid fossa.
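A minimal numerical sketch of the two-step redundancy resolution described above, under assumed placeholder values (a random 3x6 moment-arm matrix, made-up PCSA values and force caps):

```python
# Two-step redundancy resolution: (1) weighted pseudo-inverse minimizing squared
# muscle stress, (2) null-space correction nudging forces toward physiological
# bounds without changing the joint torque. All numbers are placeholders.
import numpy as np

rng = np.random.default_rng(5)
M = rng.normal(size=(3, 6))                   # moment arms: 3 joint torques, 6 muscles
tau = np.array([4.0, -2.0, 1.0])              # required joint torque (arm weight etc.)
pcsa = np.array([6.0, 4.0, 3.0, 5.0, 2.0, 4.0])   # cm^2, sets the stress weighting
f_max = 40.0 * pcsa                           # simple per-muscle strength cap

# Step 1: weighted pseudo-inverse, minimizing sum((f_i / pcsa_i)^2) s.t. M f = tau.
Winv = np.diag(pcsa**2)
f = Winv @ M.T @ np.linalg.solve(M @ Winv @ M.T, tau)

# Step 2: push bound violations through the null space of M, so the correction
# changes muscle forces without changing the joint torque (M @ N = 0).
N = np.eye(6) - np.linalg.pinv(M) @ M         # null-space projector
for _ in range(50):
    violation = np.minimum(f, 0.0) + np.maximum(f - f_max, 0.0)
    if np.allclose(violation, 0.0):
        break
    f = f - N @ violation                     # torque-consistent correction

print("torque residual:", np.linalg.norm(M @ f - tau))
print("remaining bound violation:",
      np.linalg.norm(np.minimum(f, 0.0) + np.maximum(f - f_max, 0.0)))
```

The torque residual stays at machine precision throughout step 2, since every correction lies in the null space of M; the iterative nudging typically shrinks bound violations but, in this simple form, is not guaranteed to remove them entirely.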
Abstract:
In this paper we propose a stabilized conforming finite volume element method for the Stokes equations. After establishing the convergence of the method, optimal a priori error estimates in different norms are obtained by establishing an appropriate connection between the finite volume and stabilized finite element formulations. A superconvergence result is also derived by using a postprocessing projection method. In particular, the stabilization of the continuous lowest equal-order pair finite volume element discretization is achieved by enriching the velocity space with local functions that do not necessarily vanish on the element boundaries. Finally, some numerical experiments that confirm the predicted behavior of the method are provided.
Abstract:
The α(1b)-adrenergic receptor (AR) was, after rhodopsin, the first G protein-coupled receptor (GPCR) in which point mutations were shown to trigger constitutive (agonist-independent) activity. Constitutively activating mutations have been found in other AR subtypes as well as in several GPCRs. This chapter briefly summarizes the main findings on constitutively active mutants of the α(1a)- and α(1b)-AR subtypes and the methods used to predict activating mutations, to measure constitutive activity of Gq-coupled receptors, and to investigate inverse agonism. In addition, it highlights the implications of studies on constitutively active AR mutants for elucidating the molecular mechanisms of receptor activation and drug action.