983 results for Forward problem
Abstract:
This paper introduces a new reconstruction algorithm for electrical impedance tomography. The algorithm assumes that the conductivity consists of two separate regions, represented as eccentric circles, and solves for the location of those circles. Owing to the simple geometry of the forward problem, an analytic technique using conformal mapping and separation of variables is employed. (C) 2002 John Wiley & Sons, Inc.
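The key geometric step in such an approach, mapping an eccentric pair of circles to a concentric pair so that separation of variables applies, can be carried out with a Möbius disk automorphism. The following is a minimal numeric sketch, not the paper's algorithm; the inner circle (center 0.3, radius 0.2) is an arbitrary example:

```python
import numpy as np

def mobius(z, a):
    # Disk automorphism w = (z - a)/(1 - a*z) with real parameter a
    return (z - a) / (1 - a * z)

# Eccentric inner circle: center c = 0.3, radius r = 0.2, inside the unit disk.
c, r = 0.3, 0.2
# Real a that centers the image circle: root of a^2 - ((1 + c^2 - r^2)/c) a + 1 = 0
p = (1 + c**2 - r**2) / c
a = (p - np.sqrt(p**2 - 4)) / 2          # the root inside the unit disk

theta = np.linspace(0, 2*np.pi, 360, endpoint=False)
w_outer = mobius(np.exp(1j*theta), a)         # image of the unit circle
w_inner = mobius(c + r*np.exp(1j*theta), a)   # image of the eccentric inner circle

print(np.allclose(np.abs(w_outer), 1.0))              # outer boundary stays the unit circle
print(np.allclose(np.abs(w_inner), np.abs(w_inner)[0]))  # inner circle is now concentric
```

With the two boundaries concentric, the Laplace problem separates in polar coordinates, which is what makes the analytic forward solution tractable.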
Conventional and Reciprocal Approaches to the Forward and Inverse Problems of Electroencephalography
Abstract:
The inverse problem in electroencephalography (EEG) is the localization of current sources in the brain using the surface potentials on the scalp generated by those sources. An inverse solution typically involves multiple calculations of scalp surface potentials, i.e., the EEG forward problem. To solve the forward problem, models are required both for the underlying source configuration, the source model, and for the surrounding tissues, the head model. This thesis treats two quite distinct approaches to solving the EEG forward and inverse problems using the boundary element method (BEM): the conventional approach and the reciprocal approach. The conventional approach to the forward problem entails computing the surface potentials starting from dipolar current sources. The reciprocal approach, on the other hand, first determines the electric field at the dipole source sites when the surface electrodes are used to inject and withdraw a unit current. The scalar product of this electric field with the dipole sources then yields the surface potentials. The reciprocal approach promises a number of advantages over the conventional approach, including the possibility of increasing the accuracy of the surface potentials and of reducing the computational requirements of inverse solutions. In this thesis, the BEM equations for the conventional and reciprocal approaches are developed using a common formulation, the weighted residual method. The numerical implementation of both approaches to the forward problem is described for a single-dipole source model. A head model of three concentric spheres, for which analytic solutions are available, is used. The surface potentials are computed at the centroids or at the vertices of the BEM discretization elements.
The performance of the conventional and reciprocal approaches to the forward problem is evaluated for radial and tangential dipoles of varying eccentricity and for two widely different values of skull conductivity. It is then determined whether the potential advantages of the reciprocal approach suggested by the forward-problem simulations can be exploited to yield more accurate inverse solutions. Single-dipole inverse solutions are obtained using simplex minimization for both the conventional and reciprocal approaches, each with centroid and vertex variants. Again, the numerical simulations are carried out on a three-concentric-spheres model for radial and tangential dipoles of varying eccentricity. The inverse-solution accuracy of the two approaches is compared for the two different skull conductivities, and their relative sensitivities to skull-conductivity errors and to noise are evaluated. While the conventional vertex approach yields the most accurate forward solutions for the supposedly more realistic skull conductivity, both the conventional and reciprocal approaches produce large scalp-potential errors for highly eccentric dipoles. The reciprocal approaches show the least variation in forward-solution accuracy across the different skull-conductivity values. In terms of single-dipole inverse solutions, the conventional and reciprocal approaches are of comparable accuracy. Localization errors are small, even for the highly eccentric dipoles that produce large scalp-potential errors, owing to the nonlinear nature of single-dipole inverse solutions. Both approaches also proved equally robust to skull-conductivity errors in the presence of noise.
Finally, a more realistic head model is obtained from magnetic resonance images (MRI), from which the scalp, skull and brain/cerebrospinal fluid (CSF) surfaces are extracted. Both approaches are validated on this type of model using real somatosensory evoked potentials recorded after median-nerve stimulation in healthy subjects. The inverse-solution accuracy of the conventional and reciprocal approaches and their variants, judged against anatomical landmarks identified on MRI, is again evaluated for the two different skull conductivities. Their advantages and drawbacks, including their computational requirements, are also assessed. Once again, the conventional and reciprocal approaches produce small dipole-position errors. Indeed, the position errors of single-dipole inverse solutions are inherently robust to inaccuracies in the forward solutions, but depend on the superimposed activity of other neural sources. Contrary to expectations, the reciprocal approaches do not improve dipole-position accuracy relative to the conventional approaches. However, reduced computational requirements in time and storage are the main advantages of the reciprocal approaches. This kind of localization is potentially useful for planning neurosurgical interventions, for example in patients with refractory focal epilepsy, who have often already undergone EEG and MRI.
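The conventional/reciprocal equivalence underlying the thesis can be checked numerically in the simplest possible setting, an infinite homogeneous conductor rather than a BEM head model; the conductivity, electrode positions and unit dipole moment below are illustrative assumptions:

```python
import numpy as np

sigma = 0.33                      # conductivity (S/m), a typical soft-tissue value
p  = np.array([0.0, 0.0, 1.0])    # dipole moment (unit size, for illustration)
rd = np.array([0.0, 0.0, 0.05])   # dipole location
eA = np.array([0.0, 0.0, 0.10])   # electrode A
eB = np.array([0.10, 0.0, 0.02])  # electrode B

def dipole_potential(r, rd, p):
    # potential of a current dipole in an infinite homogeneous conductor
    d = r - rd
    return p @ d / (4*np.pi*sigma*np.linalg.norm(d)**3)

# conventional approach: potential difference at the electrodes due to the dipole
V_conv = dipole_potential(eA, rd, p) - dipole_potential(eB, rd, p)

def lead_field(rd, eA, eB):
    # E-field at the dipole site for unit current injected at B and withdrawn
    # at A (this sign convention follows from Green's reciprocity)
    E = np.zeros(3)
    for s, sign in ((eB, +1.0), (eA, -1.0)):
        d = rd - s
        E += sign * d / (4*np.pi*sigma*np.linalg.norm(d)**3)
    return E

# reciprocal approach: scalar product of that field with the dipole moment
V_recip = lead_field(rd, eA, eB) @ p

print(np.isclose(V_conv, V_recip))   # the two approaches agree
```

In the BEM setting of the thesis the same identity holds, but the lead field must be computed numerically for the realistic head geometry, which is where the accuracy and cost trade-offs discussed above arise.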
Abstract:
Electrical impedance tomography (EIT) captures images of internal features of a body. Electrodes are attached to the boundary of the body, low intensity alternating currents are applied, and the resulting electric potentials are measured. Then, based on the measurements, an estimation algorithm obtains the three-dimensional internal admittivity distribution that corresponds to the image. One of the main goals of medical EIT is to achieve high resolution and an accurate result at low computational cost. However, when the finite element method (FEM) is employed and the corresponding mesh is refined to increase resolution and accuracy, the computational cost increases substantially, especially in the estimation of absolute admittivity distributions. Therefore, we consider in this work a fast iterative solver for the forward problem, which was previously reported in the context of structural optimization. We propose several improvements to this solver to increase its performance in the EIT context. The solver is based on the recycling of approximate invariant subspaces, and it is applied to reduce the EIT computation time for a constant and high resolution finite element mesh. In addition, we consider a powerful preconditioner and provide a detailed pseudocode for the improved iterative solver. The numerical results show the effectiveness of our approach: the proposed algorithm is faster than the preconditioned conjugate gradient (CG) algorithm. The results also show that even on a standard PC without parallelization, a high mesh resolution (more than 150,000 degrees of freedom) can be used for image estimation at a relatively low computational cost. (C) 2010 Elsevier B.V. All rights reserved.
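The recycling solver itself is too involved for a short sketch, but the preconditioned conjugate gradient baseline against which it is compared fits in a few lines. The following is a hedged sketch on a small random SPD system standing in for an EIT stiffness matrix; the sizes and the Jacobi preconditioner are illustrative choices, not those of the paper:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxit=500):
    # Preconditioned conjugate gradients for SPD A; M_inv applies the preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# small SPD test system standing in for an EIT finite element matrix
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50*np.eye(50)
b = rng.standard_normal(50)

x = pcg(A, b, lambda r: r / np.diag(A))   # Jacobi (diagonal) preconditioner
print(np.allclose(A @ x, b))
```

The recycling idea of the paper keeps an approximate invariant subspace from one forward solve and deflates it from the next, which pays off because EIT inversion solves many closely related systems on the same mesh.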
Abstract:
The work was carried out in collaboration with the mechanical acoustics laboratory of Marseille, France. The simulations were run in the Matlab and C languages. This project falls within the research field known as ultrasonic tissue characterization.
Abstract:
We present a technique for the rapid and reliable evaluation of linear-functional outputs of elliptic partial differential equations with affine parameter dependence. The essential components are (i) rapidly uniformly convergent reduced-basis approximations — Galerkin projection onto a space WN spanned by solutions of the governing partial differential equation at N (optimally) selected points in parameter space; (ii) a posteriori error estimation — relaxations of the residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs; and (iii) offline/online computational procedures — stratagems that exploit affine parameter dependence to de-couple the generation and projection stages of the approximation process. The operation count for the online stage — in which, given a new parameter value, we calculate the output and associated error bound — depends only on N (typically small) and the parametric complexity of the problem. The method is thus ideally suited to the many-query and real-time contexts. In this paper, building on this technique, we develop a robust inverse computational method for the very fast solution of inverse problems characterized by parametrized partial differential equations. The essential ideas are threefold: first, we apply the technique to the forward problem for the rapid certified evaluation of PDE input-output relations and associated rigorous error bounds; second, we incorporate the reduced-basis approximation and error bounds into the inverse problem formulation; and third, rather than regularize the goodness-of-fit objective, we may instead identify all (or almost all, in the probabilistic sense) system configurations consistent with the available experimental data — well-posedness is reflected in a bounded "possibility region" that furthermore shrinks as the experimental error is decreased.
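The offline/online decoupling for an affine family A(μ) = A0 + μA1 can be sketched as follows; the matrices are random stand-ins, the snapshot parameters are arbitrary, and the a posteriori error bounds of the paper are omitted. At a snapshot parameter the reduced-basis output reproduces the truth exactly, since the truth solution lies in the reduced space:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 120
# affine parameter dependence: A(mu) = A0 + mu*A1 (SPD here), output s(mu) = l^T u(mu)
M0 = rng.standard_normal((n, n)); A0 = M0 @ M0.T + n*np.eye(n)
M1 = rng.standard_normal((n, n)); A1 = M1 @ M1.T
f = rng.standard_normal(n)
l = rng.standard_normal(n)

# offline: snapshot solutions at a few sampled parameter values span W_N
mus = [0.1, 1.0, 5.0, 20.0]
W = np.column_stack([np.linalg.solve(A0 + m*A1, f) for m in mus])
W, _ = np.linalg.qr(W)                       # orthonormal reduced basis

# offline: project the affine pieces once; online cost then depends only on N
A0r, A1r = W.T @ A0 @ W, W.T @ A1 @ W
fr, lr = W.T @ f, W.T @ l

def output_rb(mu):
    # online: solve the small N x N reduced system, evaluate the output functional
    return lr @ np.linalg.solve(A0r + mu*A1r, fr)

mu = mus[1]   # at a snapshot parameter the reduced output is exact
s_truth = l @ np.linalg.solve(A0 + mu*A1, f)
print(np.isclose(output_rb(mu), s_truth))
```

Between snapshots the reduced output is only approximate, which is exactly why the paper's rigorous error bounds are needed before the surrogate can be trusted inside an inverse-problem loop.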
Abstract:
The ray connecting two points in an anisotropic, piecewise homogeneous, laterally varying medium is computed using 3D continuation techniques. Combined with algorithms for solving the initial-value problem, the method can be extended to compute qS1 and qS2 events. The algorithm exhibits the same efficiency and robustness as implementations of continuation techniques in isotropic media. Routines based on this algorithm have several applications of interest. First, in the modeling and inversion of elastic parameters in the presence of anisotropy. Second, the Newton-Raphson iterations produce wavefront attributes such as the slowness vector and the Hessian matrix of the traveltime, quantities that allow the determination of geometrical spreading and second-order traveltime approximations. These attributes make it possible to compute amplitudes along the ray and to investigate the effects of anisotropy on CRS stacking in simple velocity models.
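The two-point idea, shoot a ray, measure the miss at the receiver, and correct with a Newton step, can be illustrated in the far simpler isotropic horizontally layered setting (the thesis treats general anisotropic 3D media; the velocities, thicknesses and offset below are made up):

```python
import numpy as np

v = np.array([1500.0, 2000.0, 3000.0])   # layer velocities (m/s), illustrative
h = np.array([400.0, 600.0, 500.0])      # layer thicknesses (m)
target = 600.0                           # desired source-receiver offset (m)

def offset(p):
    # horizontal distance of the ray: X(p) = sum_i h_i p v_i / sqrt(1 - (p v_i)^2)
    s = p * v
    return np.sum(h * s / np.sqrt(1.0 - s**2))

def d_offset(p):
    # dX/dp = sum_i h_i v_i / (1 - (p v_i)^2)^(3/2)
    s = p * v
    return np.sum(h * v / (1.0 - s**2)**1.5)

p = 1e-4                                      # starting ray parameter, 0 < p < 1/max(v)
for _ in range(30):
    p -= (offset(p) - target) / d_offset(p)   # Newton correction of the miss

t = np.sum(h / (v * np.sqrt(1.0 - (p*v)**2)))  # traveltime of the converged ray
print(abs(offset(p) - target) < 1e-6)
```

In the anisotropic case the same Newton structure survives, but the derivative information comes from the wavefront attributes (slowness vector, traveltime Hessian) that the continuation algorithm produces as by-products.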
Abstract:
In petroleum production, monitoring reservoir parameters (permeability, porosity, saturation, pressure, etc.) is important for subsequent reservoir management. Variation of the dynamic reservoir parameters induces changes in the flow dynamics within the reservoir, for example pressure losses, hindering oil extraction. Fluid injection increases the internal energy of the reservoir and raises the pressure, driving the oil toward the production wells. Cross-well electromagnetic tomography can become a very effective technique for monitoring injection processes, given that the percolation of conductive fluids through the rocks is highly detectable. This thesis presents the results of a very effective electromagnetic tomography algorithm applied to synthetic data. The imaging scheme assumes cylindrical symmetry around a source consisting of a magnetic dipole. During the imaging process, 21 transmitters and 21 receivers distributed in two wells 100 meters apart were used. The forward problem was solved by the finite element method applied to the Helmholtz equation for the secondary electric field. The resulting algorithm is valid in any situation, not being subject to the restrictions imposed on algorithms based on the Born and Rytov approximations. It can therefore be applied efficiently in any setting, such as media with electrical conductivity contrasts ranging from 2 to 100, frequencies from 0.1 to 1000.0 kHz and heterogeneities of any size. The inverse problem was solved by means of the stabilized Marquardt algorithm. The solution is obtained iteratively. The inverted data, with additive Gaussian noise, are the in-phase and quadrature components of the vertical magnetic field. Without the use of constraints the problem is completely unstable, resulting in totally blurred images.
Two categories of constraints were used: relative constraints, of the smoothness type, and absolute constraints. The results obtained show the effectiveness of these two types of constraints through sharp, high-resolution images. The tomograms show that resolution is better in the vertical direction than in the horizontal, and that it also depends on frequency. The position and attitude of the heterogeneity are well recovered. It was also demonstrated that the low horizontal resolution can be attenuated or even eliminated by means of the constraints.
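The effect of a smoothness constraint in a stabilized least-squares inversion can be sketched on a toy linear problem standing in for one linearized Marquardt step; the sensitivity matrix, noise level and damping weight are illustrative, not those of the EM code:

```python
import numpy as np

rng = np.random.default_rng(3)
n_data, n_cells = 20, 40                      # underdetermined, as in tomography
G = rng.standard_normal((n_data, n_cells))    # stand-in sensitivity (Jacobian) matrix
m_true = np.sin(np.linspace(0, 3*np.pi, n_cells))      # smooth "conductivity" model
d = G @ m_true + 0.05*rng.standard_normal(n_data)      # data with additive Gaussian noise

D = np.diff(np.eye(n_cells), axis=0)          # first-difference smoothness operator
lam = 1.0                                     # stabilization (damping) weight

# damped normal equations of  min ||G m - d||^2 + lam^2 ||D m||^2
m_smooth = np.linalg.solve(G.T @ G + lam**2 * (D.T @ D), G.T @ d)
m_free = np.linalg.lstsq(G, d, rcond=None)[0] # unconstrained minimum-norm fit

# the constrained model is smoother than the unconstrained one, by construction
print(np.linalg.norm(D @ m_smooth) < np.linalg.norm(D @ m_free))
```

Absolute constraints of the kind the thesis also uses would be enforced by fixing or bounding selected model cells; only the relative (smoothness) kind is sketched here.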
Abstract:
The simulation of a zero-offset (ZO) seismic section from multi-coverage data for a 2-D medium by means of stacking is a widely used seismic reflection imaging method that reduces the amount of data and improves the signal-to-noise ratio. According to Berkovitch et al. (1999), the Multifocus method is based on the theory of homeomorphic imaging and consists of stacking multi-coverage data with an arbitrary source-receiver distribution according to a new moveout correction, called Multifocus. This moveout correction is based on a local spherical approximation of the focusing wavefront in the vicinity of the earth's surface. The method makes it possible to construct a zero-offset seismic section in the time domain with an increased signal-to-noise ratio. The Multifocus technique does not require a priori knowledge of a macro velocity model. Three parameters are used to describe the Multifocus traveltime approximation: 1) the emergence angle of the zero-offset, or normal-reflection, ray (β0); 2) the wavefront curvature at the Normal Incidence Point (RNIP); and 3) the curvature of the Normal wavefront (RN). The near-surface velocity is also required. In this thesis I apply the Multifocus stacking technique to multi-coverage data for constant-velocity and heterogeneous models, with the aim of simulating zero-offset seismic sections. In this case, since a forward problem is being solved, the macro velocity model is assumed known a priori. In the context of the inverse problem, the parameters RNIP, RN and β0 can be determined from coherence analysis applied to the multi-coverage seismic data. In solving this problem, the objective function to be optimized is defined by the maximum coherence found among the data on the seismic stacking surface.
In this thesis we discuss the sensitivity of the traveltime approximation used in Multifocus stacking as a function of the parameters RNIP, RN and β0. This sensitivity analysis is carried out in three different ways: 1) the first derivative of the objective function, 2) the coherence measure known as semblance, and 3) the sensitivity of the Multifocus stack.
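The coherence measure mentioned above, semblance, is simple to state: the energy of the stacked trace divided by the number of traces times the total energy. A sketch on synthetic gathers (the wavelet, trace count and noise levels are arbitrary):

```python
import numpy as np

def semblance(gather):
    # gather: (n_traces, n_samples) window already aligned by a trial moveout.
    # Semblance = ||stack||^2 / (N * total energy); 1 for perfectly coherent
    # traces, about 1/N for incoherent noise.
    num = np.sum(gather.sum(axis=0)**2)
    den = gather.shape[0] * np.sum(gather**2)
    return num / den

rng = np.random.default_rng(4)
t = np.arange(50)
wavelet = np.exp(-0.5*((t - 25)/4.0)**2)               # synthetic pulse
coherent = np.tile(wavelet, (12, 1)) + 0.1*rng.standard_normal((12, 50))
noise = rng.standard_normal((12, 50))                  # no aligned signal at all

print(semblance(coherent) > 0.8)    # aligned signal: semblance near 1
print(semblance(noise) < 0.4)       # incoherent noise: semblance near 1/12
```

Scanning the (RNIP, RN, β0) parameter space and keeping the triple that maximizes this measure is how the stacking parameters are estimated from the data.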
Abstract:
Medical research has shown that early detection of breast cancer decisively improves the chances of recovery. The present work is an attempt to achieve this by means of electrical impedance tomography. To this end, electrodes are attached to the breast, through which current flows into the tissue. For various current configurations the resulting potential distributions are measured, which allows conclusions to be drawn about the conductivity distribution. Tumors can thus be localized on the basis of their electrical properties, which differ from those of normal tissue. This dissertation describes various procedures conceivable in practice. The problem is investigated in two and in three dimensions with one or several current configurations. In the two-dimensional case, cross-sectional images of objects in an experimental tank are reconstructed from realistic measurement data. All investigations in three dimensions are based on synthetically generated data. A further focus of the work is the determination of the current flow through electrodes in three dimensions. Knowledge of these current densities is of great importance for an accurate treatment of the forward problem.
Abstract:
This work sets out to evaluate the potential benefits and pitfalls of using a priori information to help solve the Magnetoencephalographic (MEG) inverse problem. In chapter one the forward problem in MEG is introduced, together with a scheme that demonstrates how a priori information can be incorporated into the inverse problem. Chapter two contains a literature review of techniques currently used to solve the inverse problem. Emphasis is put on the kind of a priori information that is used by each of these techniques and the ease with which additional constraints can be applied. The formalism of the FOCUSS algorithm is shown to allow for the incorporation of a priori information in an insightful and straightforward manner. In chapter three it is described how anatomical constraints, in the form of a realistically shaped source space, can be extracted from a subject's Magnetic Resonance Image (MRI). The use of such constraints relies on accurate co-registration of the MEG and MRI co-ordinate systems. Variations of the two main co-registration approaches, based on fiducial markers or on surface matching, are described and the accuracy and robustness of a surface matching algorithm is evaluated. Figures of merit introduced in chapter four are shown to give insight into the limitations of a typical measurement set-up and the potential value of a priori information. It is shown in chapter five that constrained dipole fitting and FOCUSS outperform unconstrained dipole fitting when data with low SNR are used. However, the effect of errors in the constraints can reduce this advantage. Finally, it is demonstrated in chapter six that the results of different localisation techniques give corroborative evidence about the location and activation sequence of the human visual cortical areas underlying the first 125ms of the visual magnetic evoked response recorded with a whole head neuromagnetometer.
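The FOCUSS recursion referred to above is a re-weighted minimum-norm estimate; each iteration solves a weighted minimum-norm problem with weights built from the previous iterate. The following is a hedged sketch on a random toy gain matrix (not a real lead field), with a small weight floor added here for numerical safety:

```python
import numpy as np

def focuss(L, b, n_iter=15, floor=1e-8):
    # FOCUSS: re-weighted minimum norm; the weights built from the previous
    # iterate progressively focus the estimate onto a few active locations
    x = np.linalg.pinv(L) @ b                 # iteration 0: plain minimum norm
    for _ in range(n_iter):
        W = np.diag(np.abs(x) + floor)        # small floor keeps the step well posed
        x = W @ np.linalg.pinv(L @ W) @ b     # weighted minimum-norm update
    return x

rng = np.random.default_rng(5)
n_sens, n_src = 10, 40
L = rng.standard_normal((n_sens, n_src))      # toy gain matrix, illustrative only
x_true = np.zeros(n_src); x_true[[5, 22]] = [2.0, -1.5]   # two focal sources
b = L @ x_true                                # noiseless measurements

x_hat = focuss(L, b)
print(np.allclose(L @ x_hat, b))              # the final iterate still explains the data
```

In this noiseless toy run the mass of `x_hat` concentrates on a handful of entries, ideally the two true source indices; with real MEG data the anatomical constraints and noise handling discussed in the thesis become essential.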
Abstract:
Methods of solving the neuro-electromagnetic inverse problem are examined and developed, with specific reference to the human visual cortex. The anatomy, physiology and function of the human visual system are first reviewed. Mechanisms by which the visual cortex gives rise to external electric and magnetic fields are then discussed, and the forward problem is described mathematically for the case of an isotropic, piecewise homogeneous volume conductor, and then for an anisotropic, concentric, spherical volume conductor. Methods of solving the inverse problem are reviewed, before a new technique is presented. This technique combines prior anatomical information gained from stereotaxic studies with a probabilistic distributed-source algorithm to yield accurate, realistic inverse solutions. The solution accuracy is enhanced by using both visual evoked electric and magnetic responses simultaneously. The numerical algorithm is then modified to perform equivalent current dipole fitting and minimum norm estimation, and these three techniques are implemented on a transputer array for fast computation. Due to the linear nature of the techniques, they can be executed on up to 22 transputers with close to linear speedup. The latter part of the thesis describes the application of the inverse methods to the analysis of visual evoked electric and magnetic responses. The CIIm peak of the pattern onset evoked magnetic response is deduced to be a product of current flowing away from the surface areas 17, 18 and 19, while the pattern reversal P100m response originates in the same areas, but from oppositely directed current. Cortical retinotopy is examined using sectorial stimuli; the CI and CIm peaks of the pattern onset electric and magnetic responses are found to originate from areas V1 and V2 simultaneously, and they therefore do not conform to a simple cruciform model of primary visual cortex.
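Minimum norm estimation, one of the three techniques implemented, has a closed form: among all source vectors that reproduce the measurements exactly, take the one of smallest L2 norm. A toy sketch with a random stand-in gain matrix (sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
L = rng.standard_normal((8, 30))    # toy gain matrix: 8 sensors, 30 sources
b = rng.standard_normal(8)          # synthetic measurement vector

# minimum-norm estimate: x = L^T (L L^T)^{-1} b
x_mn = L.T @ np.linalg.solve(L @ L.T, b)
print(np.allclose(L @ x_mn, b))     # reproduces the data exactly

# adding any null-space component still fits the data but only increases the norm,
# which is the sense in which x_mn is the "minimum norm" solution
null_vec = np.linalg.svd(L)[2][-1]  # a source pattern invisible to the sensors
x_alt = x_mn + null_vec
print(np.allclose(L @ x_alt, b) and np.linalg.norm(x_alt) > np.linalg.norm(x_mn))
```

The invisible null-space patterns are the mathematical expression of the inverse problem's ambiguity, and the anatomical priors described above are what select among them.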
Abstract:
One of the most pressing demands on electrophysiology applied to the diagnosis of epilepsy is the non-invasive localization of the neuronal generators responsible for brain electrical and magnetic fields (the so-called inverse problem). These neuronal generators produce primary currents in the brain, which together with passive currents give rise to the EEG signal. Unfortunately, the signal we measure on the scalp surface doesn't directly indicate the location of the active neuronal assemblies. This is the expression of the ambiguity of the underlying static electromagnetic inverse problem, partly due to the relatively limited number of independent measures available. A given electric potential distribution recorded at the scalp can be explained by the activity of infinite different configurations of intracranial sources. In contrast, the forward problem, which consists of computing the potential field at the scalp from known source locations and strengths with known geometry and conductivity properties of the brain and its layers (CSF/meninges, skin and skull), i.e. the head model, has a unique solution. The head models vary from the computationally simpler spherical models (three or four concentric spheres) to the realistic models based on the segmentation of anatomical images obtained using magnetic resonance imaging (MRI). Realistic models – computationally intensive and difficult to implement – can separate different tissues of the head and account for the convoluted geometry of the brain and the significant inter-individual variability. In real-life applications, if the assumptions of the statistical, anatomical or functional properties of the signal and the volume in which it is generated are meaningful, a true three-dimensional tomographic representation of sources of brain electrical activity is possible in spite of the ‘ill-posed’ nature of the inverse problem (Michel et al., 2004). 
The techniques used to achieve this are now referred to as electrical source imaging (ESI) or magnetic source imaging (MSI). The first issue to influence reconstruction accuracy is spatial sampling, i.e. the number of EEG electrodes. It has been shown that this relationship is not linear, reaching a plateau at about 128 electrodes, provided spatial distribution is uniform. The second factor is related to the different properties of the source localization strategies used with respect to the hypothesized source configuration.
Abstract:
Scientific curiosity, exploration of georesources and environmental concerns are pushing the geoscientific research community toward subsurface investigations of ever-increasing complexity. This review explores various approaches to formulate and solve inverse problems in ways that effectively integrate geological concepts with geophysical and hydrogeological data. Modern geostatistical simulation algorithms can produce multiple subsurface realizations that are in agreement with conceptual geological models and statistical rock physics can be used to map these realizations into physical properties that are sensed by the geophysical or hydrogeological data. The inverse problem consists of finding one or an ensemble of such subsurface realizations that are in agreement with the data. The most general inversion frameworks are presently often computationally intractable when applied to large-scale problems and it is necessary to better understand the implications of simplifying (1) the conceptual geological model (e.g., using model compression); (2) the physical forward problem (e.g., using proxy models); and (3) the algorithm used to solve the inverse problem (e.g., Markov chain Monte Carlo or local optimization methods) to reach practical and robust solutions given today's computer resources and knowledge. We also highlight the need to not only use geophysical and hydrogeological data for parameter estimation purposes, but also to use them to falsify or corroborate alternative geological scenarios.
Abstract:
This paper presents a biased random-key genetic algorithm for the resource constrained project scheduling problem. The chromosome representation of the problem is based on random keys. Active schedules are constructed using a priority-rule heuristic in which the priorities of the activities are defined by the genetic algorithm. A forward-backward improvement procedure is applied to all solutions. The chromosomes supplied by the genetic algorithm are adjusted to reflect the solutions obtained by the improvement procedure. The heuristic is tested on a set of standard problems taken from the literature and compared with other approaches. The computational results validate the effectiveness of the proposed algorithm.
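The random-key chromosome and the biased crossover can be sketched in a few lines; the probability rho, the key length and the RNG seeds are illustrative, and the priority-rule schedule-construction heuristic itself is omitted:

```python
import numpy as np

# random-key encoding: a chromosome is a vector of keys in [0, 1); one common
# decoding sorts the keys, so every chromosome maps to a valid priority order
def decode(keys):
    return np.argsort(keys)              # activity with the smallest key goes first

# biased uniform crossover: each gene is taken from the elite parent with
# probability rho > 0.5, which is what makes the genetic algorithm "biased"
def crossover(elite, other, rho, rng):
    mask = rng.random(elite.size) < rho
    return np.where(mask, elite, other)

rng = np.random.default_rng(7)
elite, other = rng.random(6), rng.random(6)
child = crossover(elite, other, rho=0.7, rng=rng)

print(sorted(decode(child).tolist()) == list(range(6)))          # always a valid permutation
print(all(g in (e, o) for g, e, o in zip(child, elite, other)))  # genes come from the parents
```

Because any key vector decodes to a feasible priority order, crossover never produces invalid offspring, which is the main attraction of the random-key representation for scheduling problems.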
Abstract:
Editorial