987 results for ill-posed inverse problem
Abstract:
Visual evoked magnetic responses were recorded to full-field and left and right half-field stimulation with three check sizes (70′, 34′ and 22′) in five normal subjects. Recordings were made sequentially on a 20-position grid (4 × 5) referenced to the inion, by means of a single-channel direct-current Superconducting Quantum Interference Device (DC-SQUID) second-order gradiometer. The topographic maps were consistent in the same subjects recorded 2 months apart. The half-field responses produced the strongest signals in the contralateral hemisphere and were consistent with the cruciform model of the calcarine fissure. Right half fields produced upper-left-quadrant outgoing fields and lower-left-quadrant ingoing fields, while the left half field produced the opposite response. The topographic maps also varied with check size, the larger checks producing positive or negative maxima located more anteriorly than the small checks. In addition, with large checks the full-field responses could be explained as the summation of the two half fields, whereas full-field responses to smaller checks were less predictable and may be due to sources located at the occipital pole or on the lateral surface. Dipole sources were also localised where appropriate using inverse problem solutions. Topographic data will be vital to the clinical use of the visual evoked field and, in addition, provide complementary information to visual evoked potentials, allowing detailed studies of the visual cortex. © 1992 Kluwer Academic Publishers.
Abstract:
The objective of this work was to explore the performance of a recently introduced source extraction method, Functional Source Separation (FSS), in recovering induced oscillatory change responses from extra-cephalic magnetoencephalographic (MEG) signals. Unlike algorithms used to solve the inverse problem, FSS makes no assumption about the underlying biophysical source model; instead, it uses task-related features (functional constraints) to estimate the source(s) of interest. FSS was compared with blind source separation (BSS) approaches such as Principal and Independent Component Analysis (PCA and ICA), which are not subject to any explicit forward solution or functional constraint, but require source uncorrelatedness (PCA) or independence (ICA). A visual MEG experiment was analyzed in which signals were recorded from six subjects viewing a set of static horizontal black/white square-wave grating patterns at different spatial frequencies. The beamforming technique Synthetic Aperture Magnetometry (SAM) was applied to localize task-related sources; the obtained spatial filters were used to automatically select BSS and FSS components in the spatial area of interest. Source spectral properties were investigated using Morlet-wavelet time-frequency representations, and significant task-induced changes were evaluated by means of a resampling technique; the resulting spectral behaviours in the gamma frequency band of interest (20-70 Hz), as well as the spatial-frequency-dependent gamma reactivity, were quantified and compared among methods. Among the tested approaches, only FSS was able to estimate the expected sustained gamma activity enhancement in primary visual cortex throughout the whole duration of the stimulus presentation for all subjects, and to obtain sources comparable to invasively recorded data.
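To make the time-frequency step concrete, the following minimal sketch computes Morlet-wavelet power for a single extracted component over a gamma-band frequency grid; the function name, sampling rate and wavelet parameters are illustrative assumptions, not values taken from the study.

```python
import numpy as np

def morlet_power(x, sfreq, freqs, n_cycles=7.0):
    """Morlet-wavelet time-frequency power of a single-channel signal x."""
    power = np.empty((len(freqs), len(x)))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2.0 * np.pi * f)              # temporal width of the wavelet
        t = np.arange(-5 * sigma_t, 5 * sigma_t, 1.0 / sfreq)
        wavelet = np.exp(2j * np.pi * f * t - t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))    # unit-energy normalisation
        power[i] = np.abs(np.convolve(x, wavelet, mode="same")) ** 2
    return power

# e.g. gamma-band grid: morlet_power(component, sfreq=600.0, freqs=np.arange(20, 71, 2))
```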
Abstract:
This work sets out to evaluate the potential benefits and pitfalls of using a priori information to help solve the magnetoencephalographic (MEG) inverse problem. In chapter one, the forward problem in MEG is introduced, together with a scheme that demonstrates how a priori information can be incorporated into the inverse problem. Chapter two contains a literature review of techniques currently used to solve the inverse problem. Emphasis is put on the kind of a priori information that is used by each of these techniques and the ease with which additional constraints can be applied. The formalism of the FOCUSS algorithm is shown to allow the incorporation of a priori information in an insightful and straightforward manner. Chapter three describes how anatomical constraints, in the form of a realistically shaped source space, can be extracted from a subject's Magnetic Resonance Image (MRI). The use of such constraints relies on accurate co-registration of the MEG and MRI co-ordinate systems. Variations of the two main co-registration approaches, based on fiducial markers or on surface matching, are described, and the accuracy and robustness of a surface matching algorithm is evaluated. Figures of merit introduced in chapter four are shown to give insight into the limitations of a typical measurement set-up and the potential value of a priori information. It is shown in chapter five that constrained dipole fitting and FOCUSS outperform unconstrained dipole fitting when data with low SNR are used; however, errors in the constraints can reduce this advantage. Finally, it is demonstrated in chapter six that the results of different localisation techniques give corroborative evidence about the location and activation sequence of the human visual cortical areas underlying the first 125 ms of the visual magnetic evoked response recorded with a whole-head neuromagnetometer.
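As a rough illustration of the FOCUSS formalism discussed in chapter two, the sketch below shows an iteratively reweighted minimum-norm loop in which a priori information could enter through initial source weights; the matrix shapes, parameters and the optional-weight handling are assumptions, not the thesis implementation.

```python
import numpy as np

def focuss(A, b, w0=None, n_iter=10, p=1.0, eps=1e-12):
    """FOCUSS-style iteratively reweighted minimum-norm estimate.

    A : (m, n) forward (lead-field) matrix, b : (m,) measurements,
    w0 : optional a priori weights over the n candidate source locations."""
    # start from the (optionally weighted) minimum-norm solution
    x = np.linalg.pinv(A) @ b if w0 is None else w0 * (np.linalg.pinv(A * w0) @ b)
    for _ in range(n_iter):
        w = np.abs(x) ** p + eps          # emphasise currently strong sources
        q = np.linalg.pinv(A * w) @ b     # minimum-norm solution of the column-weighted system
        x = w * q                         # map back to the original source amplitudes
    return x
```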
Abstract:
The work presented in this thesis is divided into two distinct sections. In the first, the functional neuroimaging technique of magnetoencephalography (MEG) is described and a new technique is introduced for accurate combination of the MEG and MRI co-ordinate systems. In the second part of the thesis, MEG and the analysis technique of SAM are used to investigate responses of the visual system in the context of functional specialisation within the visual cortex. In chapter one, the sources of MEG signals are described, followed by a brief description of the instrumentation necessary for accurate MEG recordings. The chapter concludes by introducing the forward and inverse problems of MEG, techniques to solve the inverse problem, and a comparison of MEG with other neuroimaging techniques. Chapter two provides an important contribution to the field of research with MEG. Firstly, it is described how the MEG and MRI co-ordinate systems are combined for localisation and visualisation of activated brain regions. A previously used co-registration method is then described, and a new technique is introduced. In a series of experiments, it is demonstrated that using fixed fiducial points provides a considerable improvement in the accuracy and reliability of co-registration. Chapter three introduces the visual system, starting from the retina and ending with the higher visual areas. The functions of the magnocellular and parvocellular pathways are described, and it is shown how the parallel visual pathways remain segregated throughout the visual system. The structural and functional organisation of the visual cortex is then described. Chapter four presents strong evidence in favour of the link between conscious experience and synchronised brain activity. The spatiotemporal responses of the visual cortex are measured in response to specific gratings. It is shown that stimuli that induce visual discomfort and visual illusions share their physical properties with those that induce highly synchronised gamma-frequency oscillations in the primary visual cortex. Finally, chapter five is concerned with the localisation of colour in the visual cortex. In this first-ever use of Synthetic Aperture Magnetometry to investigate colour processing in the visual cortex, it is shown that, in response to isoluminant chromatic gratings, the highest magnitude of cortical activity arises from area V2.
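The fiducial-based co-registration described in chapter two amounts, in its simplest form, to a least-squares rigid-body fit between matched landmark coordinates. The sketch below is the standard Kabsch/Procrustes estimate under that assumption, not the thesis code.

```python
import numpy as np

def rigid_fit(P, Q):
    """Rotation R and translation t minimising ||Q - (P @ R.T + t)||; rows of P and Q
    are corresponding fiducial points in MEG and MRI coordinates, respectively."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t
```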
Abstract:
Methods of solving the neuro-electromagnetic inverse problem are examined and developed, with specific reference to the human visual cortex. The anatomy, physiology and function of the human visual system are first reviewed. Mechanisms by which the visual cortex gives rise to external electric and magnetic fields are then discussed, and the forward problem is described mathematically, first for the case of an isotropic, piecewise homogeneous volume conductor, and then for an anisotropic, concentric, spherical volume conductor. Methods of solving the inverse problem are reviewed before a new technique is presented. This technique combines prior anatomical information gained from stereotaxic studies with a probabilistic distributed-source algorithm to yield accurate, realistic inverse solutions. The solution accuracy is enhanced by using both visual evoked electric and magnetic responses simultaneously. The numerical algorithm is then modified to perform equivalent current dipole fitting and minimum norm estimation, and these three techniques are implemented on a transputer array for fast computation. Owing to the linear nature of the techniques, they can be executed on up to 22 transputers with close to linear speedup. The latter part of the thesis describes the application of the inverse methods to the analysis of visual evoked electric and magnetic responses. The CIIm peak of the pattern-onset evoked magnetic response is deduced to be a product of current flowing away from the surface in areas 17, 18 and 19, while the pattern-reversal P100m response originates in the same areas, but from oppositely directed current. Cortical retinotopy is examined using sectorial stimuli; the CI and CIm peaks of the pattern-onset electric and magnetic responses are found to originate from areas V1 and V2 simultaneously, and they therefore do not conform to a simple cruciform model of the primary visual cortex.
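For reference, the regularised minimum norm estimate mentioned above can be written in a few lines; the lead-field name and regularisation value are illustrative. Combining evoked electric and magnetic responses, as in the thesis, would then amount to stacking suitably scaled EEG and MEG lead fields and measurements in L and b.

```python
import numpy as np

def minimum_norm(L, b, lam=1e-2):
    """Regularised minimum-norm estimate j = L.T @ (L @ L.T + lam*I)^(-1) @ b."""
    gram = L @ L.T + lam * np.eye(L.shape[0])
    return L.T @ np.linalg.solve(gram, b)
```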
Abstract:
The major challenge of MEG, the inverse problem, is to estimate the very weak primary neuronal currents from measurements of extracranial magnetic fields. The non-uniqueness of this inverse solution is compounded by the fact that MEG signals contain large environmental and physiological noise that further complicates the problem. In this paper, we evaluate the effectiveness of magnetic noise cancellation by synthetic gradiometers and the beamformer analysis method of synthetic aperture magnetometry (SAM) for source localisation in the presence of large stimulus-generated noise. We demonstrate that activation of primary somatosensory cortex can be accurately identified using SAM despite the presence of significant stimulus-related magnetic interference. This interference was generated by a contact heat evoked potential stimulator (CHEPS), recently developed for thermal pain research, but which to date has not been used in an MEG environment. We also show that in a reduced shielding environment the use of higher-order synthetic gradiometry is sufficient to obtain signal-to-noise ratios (SNRs) that allow for accurate localisation of cortical sensory function.
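SAM belongs to the family of linearly constrained minimum-variance (LCMV) beamformers; the sketch below shows the generic LCMV weight computation for a single lead field (SAM additionally optimises the dipole orientation at each location), with assumed names and regularisation.

```python
import numpy as np

def lcmv_weights(C, l, reg=0.05):
    """Unit-gain minimum-variance beamformer weights for lead field l (n_channels,)
    and data covariance C (n_channels, n_channels)."""
    C_reg = C + reg * np.trace(C) / C.shape[0] * np.eye(C.shape[0])  # diagonal loading
    Ci_l = np.linalg.solve(C_reg, l)
    return Ci_l / (l @ Ci_l)          # w @ l == 1, unit gain at the target location
```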
Abstract:
An inverse problem is considered where the structure of multiple sound-soft planar obstacles is to be determined given the direction of the incoming acoustic field and knowledge of the corresponding total field on a curve located outside the obstacles. A local uniqueness result is given for this inverse problem, suggesting that the reconstruction can be achieved with a single incident wave. A numerical procedure based on the concept of the topological derivative of an associated cost functional is used to produce images of the obstacles. No a priori assumption about the number of obstacles present is needed. Numerical results are included showing that accurate reconstructions can be obtained and that the proposed method is capable of finding both the shapes and the number of obstacles with one or a few incident waves.
Abstract:
A coupled resonator optical waveguide (CROW) bottle is a CROW with a bottle-shaped, nonuniform distribution of resonator and coupling parameters. This Letter solves the inverse problem for a CROW bottle, i.e., it develops a simple analytical method that determines a CROW with the required group delay and dispersion characteristics. In particular, the parameters of CROWs exhibiting group delay with zero dispersion (constant group delay) and constant dispersion (linear group delay) are found. © 2014 Optical Society of America.
Abstract:
This paper describes a method of signal preprocessing under active monitoring. Suppose we want to solve the inverse problem of obtaining the response of a medium to one powerful signal, which is equivalent to obtaining the transmission function of the medium, but we do not have the opportunity to conduct such an experiment (it might be too expensive or harmful to the environment). In practice, the problem can be reduced to obtaining the transmission function of the medium. In this case we can conduct a series of experiments of relatively low power and superpose the response signals. However, this approach entails considerable loss of information (especially in the high-frequency domain) due to fluctuations of the phase, the frequency and the starting time of each individual experiment. The preprocessing technique presented in this paper allows us to substantially restore the response of the medium and consequently to find a better estimate of the transmission function. The technique is based on expanding the initial signal into a system of orthogonal functions.
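The information loss described above is easy to reproduce numerically: averaging many low-power responses whose onset times fluctuate acts as a low-pass filter on the stack. The sketch below illustrates only this jitter effect with a synthetic wavelet; it is not the orthogonal-expansion technique proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 1000.0, 2048                              # sampling rate (Hz), samples per record
t = np.arange(n) / fs
pulse = np.exp(-((t - 1.0) ** 2) / (2 * 0.01**2)) * np.cos(2 * np.pi * 60 * t)  # reference response

stack = np.zeros(n)
for _ in range(200):                              # 200 low-power repetitions
    jitter = int(rng.normal(0.0, 0.005) * fs)     # ~5 ms start-time fluctuation
    stack += np.roll(pulse, jitter) + 0.5 * rng.standard_normal(n)
stack /= 200

# High-frequency content of the naive stack is attenuated relative to the reference:
ratio = np.abs(np.fft.rfft(stack)) / (np.abs(np.fft.rfft(pulse)) + 1e-12)
```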
Abstract:
A method for measurement and visualization of the complex transmission coefficient of 2-D micro-objects is proposed. The method is based on calculation of the transmission coefficient from the diffraction pattern and the illumination aperture function for monochromatic light. A phase-stepping method was used to determine the phase of the diffracted light.
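As a point of reference, a common four-step variant of phase stepping recovers the wrapped phase from four frames shifted by quarter periods; the generic formula below is sketched under that assumption and is not necessarily the exact stepping scheme used by the authors.

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Wrapped phase from four intensity frames recorded with reference-phase
    shifts of 0, pi/2, pi and 3*pi/2:  I_k = I0 * (1 + V*cos(phi + k*pi/2))."""
    return np.arctan2(i3 - i1, i0 - i2)
```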
Abstract:
2000 Mathematics Subject Classification: 12F12, 15A66.
Abstract:
In this paper, the identification of the time-dependent blood perfusion coefficient is formulated as an inverse problem. The bio-heat conduction problem is transformed into the classical heat conduction problem, and the transformed inverse problem is then solved using the method of fundamental solutions together with Tikhonov regularization. Some numerical results are presented in order to demonstrate the accuracy and stability of the proposed meshless numerical algorithm.
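In a method-of-fundamental-solutions discretisation, the unknown source intensities satisfy an ill-conditioned linear system, and a zeroth-order Tikhonov solution of such a system takes the form sketched below; the collocation matrix A, data b and regularisation value are generic placeholders, not the paper's formulation.

```python
import numpy as np

def tikhonov(A, b, lam=1e-6):
    """Zeroth-order Tikhonov solution of the ill-conditioned system A @ c = b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

In practice the regularisation parameter lam would be chosen, for example, by the L-curve or discrepancy principle rather than fixed in advance.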
Abstract:
A practical approach to estimating rock thermal conductivities is to use rock models based solely on the observed or expected rock mineral content. In this study, we evaluate the performances of the Krischer and Esdorn (KE), Hashin and Shtrikman (HS), classic Maxwell (CM), Maxwell-Wiener (MW), and geometric mean (GM) models in reproducing measured thermal conductivities of crystalline rocks. We used 1,105 samples of igneous and metamorphic rocks collected in outcrops of the Borborema Province, Northeastern Brazil. Both thermal conductivity measurements and petrographic modal analysis (percent volumes of quartz, K-feldspar, plagioclase, and the sum of mafic minerals) were carried out. We divided the rocks into two groups: (a) igneous and ortho-derived (meta-igneous) rocks and (b) metasedimentary rocks. The group of igneous and ortho-derived rocks (939 samples) covers most of the lithologies defined in the Streckeisen diagram, with higher concentrations in the fields of granite, granodiorite, and tonalite. In the group of metasedimentary rocks (166 samples), representative lithologies, usually of low to medium metamorphic grade, were sampled. We treat the problem of reproducing the measured values of rock conductivity as an inverse problem in which, besides the conductivity measurements, the volume fractions of the constituent minerals are known, while the effective conductivities of the constituent minerals and the model parameters are unknown. The key idea was to identify the model (and its associated estimates of effective mineral conductivities and parameters) that best reproduces the measured rock conductivities. We evaluate model performance by the percentage of rock samples whose estimated conductivities honor the measured conductivities within a tolerance of 15%. In general, for all models, the performances were considerably poorer for the metasedimentary rocks (between 34% and 65%) than for the igneous and ortho-derived rocks (between 51% and 70%). For igneous and ortho-derived rocks, all model performances were very similar (about 70%), except for the GM model, which presented a poorer performance (between 51% and 65%); the KE and HS models (70%) were slightly superior to the CM and MW models (67%). The quartz content is the dominant factor in explaining the rock conductivity of igneous and ortho-derived rocks; in particular, using the MW model the solution is in practice the series association of the quartz content. For metasedimentary rocks, on the other hand, model performances differed, and the performance of the KE model (65%) was markedly superior to those of the HS (53%), CM (between 34% and 42%), MW (40%), and GM (between 35% and 42%) models. The estimated effective mineral conductivities are stable under perturbations of both the rock conductivity measurements and the quartz volume fraction. The fact that the metasedimentary rocks are richer in platy minerals partially explains the poor model performances, owing both to the high thermal anisotropy of biotite (one of the most common platy minerals) and to the difficulty of obtaining polished surfaces for measurement coupling when platy minerals are present. Independently of rock type, both very low and very high rock conductivity values are hardly explained by rock models based solely on rock mineral content.
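For orientation, the two simplest ingredients of the comparison, the geometric mean model and the Wiener (series/parallel) bounds underlying the Maxwell-Wiener estimates, can be written directly from the mineral volume fractions. The sketch below uses illustrative mineral conductivities, not the measured dataset.

```python
import numpy as np

def geometric_mean(k, phi):
    """Geometric-mean (GM) effective conductivity for mineral conductivities k
    and volume fractions phi (phi sums to 1)."""
    return float(np.prod(np.asarray(k, float) ** np.asarray(phi, float)))

def wiener_bounds(k, phi):
    """Series (harmonic-mean) and parallel (arithmetic-mean) Wiener bounds."""
    k, phi = np.asarray(k, float), np.asarray(phi, float)
    return 1.0 / float(np.sum(phi / k)), float(np.sum(phi * k))

# e.g. a quartz-rich granitoid with illustrative values in W m-1 K-1
# (quartz, K-feldspar, plagioclase, mafics):
# geometric_mean([7.7, 2.5, 2.0, 2.3], [0.30, 0.25, 0.30, 0.15])
```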
Abstract:
The key aspect limiting resolution in crosswell traveltime tomography is illumination, a well known but not so well exemplified result. Resolution in the 2D case is revisited using a simple geometric approach based on the angular aperture distribution and the properties of the Radon Transform. Analytically, it is shown that if an interface has dips contained within the angular aperture limits at all points, it is correctly imaged in the tomogram. Inversion of synthetic data confirms this result and also shows that isolated artifacts may be present when the dip is near the illumination limit. In the inverse sense, however, if an interface is interpretable from a tomogram, even an approximately horizontal interface, there is no guarantee that it corresponds to a true interface. Similarly, if a body is present in the interwell region it is diffusely imaged in the tomogram, but its interfaces, particularly vertical edges, cannot be resolved and additional artifacts may be present. Again, in the inverse sense, there is no guarantee that an isolated anomaly corresponds to a true anomalous body, because the anomaly may itself be an artifact. Together, these results express the dilemma of ill-posed inverse problems: there is no guarantee that the reconstruction corresponds to the true distribution. The limitations due to illumination may not be overcome by the use of mathematical constraints. It is shown that crosswell tomograms derived using sparsity constraints, with both Discrete Cosine Transform and Daubechies bases, basically reproduce the same features seen in tomograms obtained with the classic smoothness constraint. Interpretation must always take into consideration the a priori information and the particular limitations due to illumination. An example of interpreting a real data survey in this context is also presented.
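The sparsity-constrained inversions referred to here can be sketched generically as an l1-regularised least-squares problem solved in a DCT basis by iterative soft thresholding (ISTA); the names, the 1-D transform and the parameters below are assumptions, not the processing applied to the survey data.

```python
import numpy as np
from scipy.fft import dct, idct

def sparse_dct_inversion(G, d, lam=0.1, n_iter=500):
    """Traveltime inversion min 0.5*||G m - d||^2 + lam*||c||_1 with m = idct(c),
    solved by iterative soft thresholding (ISTA). G: ray-path matrix, d: traveltimes."""
    step = 1.0 / np.linalg.norm(G, 2) ** 2            # 1 / Lipschitz constant of the data term
    c = np.zeros(G.shape[1])                          # DCT coefficients of the slowness model
    for _ in range(n_iter):
        m = idct(c, norm="ortho")                     # model in physical (slowness) space
        grad = dct(G.T @ (G @ m - d), norm="ortho")   # gradient w.r.t. c (DCT is orthogonal)
        c = c - step * grad
        c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)   # soft threshold
    return idct(c, norm="ortho")
```

In practice the slowness model is two-dimensional, so a 2-D DCT (or a Daubechies wavelet transform) would replace the 1-D transform, but the structure of the iteration is unchanged.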