936 results for ill-posed inverse problem


Relevance: 100.00%

Abstract:

MEG directly measures neuronal events and has greater temporal resolution than fMRI, whose temporal resolution is limited mainly by the longer timescale of the hemodynamic response. On the other hand, fMRI has advantages in spatial resolution, whereas localization with MEG can be ambiguous owing to the non-uniqueness of the electromagnetic inverse problem. These methods can therefore provide complementary information and be used to create models of brain function that are both spatially and temporally accurate. We investigated the degree of overlap, revealed by the two imaging methods, in areas involved in sensory or motor processing in healthy subjects and neurosurgical patients. Furthermore, we used the spatial information from fMRI to construct a spatiotemporal model of the MEG data in order to investigate the sensorimotor system and to create a spatiotemporal model of its function. We compared the localization results from MEG and fMRI with invasive electrophysiological cortical mapping. We used a recently introduced method, contextual clustering, for hypothesis testing of fMRI data and assessed the effect of using neighbourhood information on the reproducibility of fMRI results. Using MEG, we identified the ipsilateral primary sensorimotor cortex (SMI) as a novel source area contributing to the somatosensory evoked fields (SEFs) to median nerve stimulation. Using combined MEG and fMRI measurements, we found that two separate areas in the lateral fissure may be the generators of the SEF responses from the secondary somatosensory cortex region. The two imaging methods indicated activation in corresponding locations. By using complementary information from MEG and fMRI we established a spatiotemporal model of somatosensory cortical processing. This spatiotemporal model of cerebral activity was in good agreement with results from several studies using invasive electrophysiological measurements and with anatomical studies in monkey and man concerning the connections between somatosensory areas. In neurosurgical patients, the MEG dipole model proved more reliable than fMRI for identifying the central sulcus. This was due to prominent activation in non-primary areas in fMRI, which in some cases led to erroneous or ambiguous localization of the central sulcus.

Relevance: 100.00%

Abstract:

We explore the application of pseudo time marching schemes, involving either deterministic integration or stochastic filtering, to solve the inverse problem of parameter identification of large-dimensional structural systems from partial and noisy measurements of strictly static response. Solutions of such non-linear inverse problems can provide useful local stiffness variations and do not have to confront modelling uncertainties in damping, an important yet inadequately understood aspect of dynamic system identification problems. The usual least-squares solution is obtained through a regularized Gauss-Newton method (GNM), whose results are known to depend sensitively on the regularization parameter and the data noise intensity. Finite-time, recursive integration of the pseudo-dynamical GNM (PD-GNM) update equation addresses the major numerical difficulty associated with the near-zero singular values of the linearized operator and gives results that are not sensitive to the time step of integration. We also propose a pseudo-dynamic stochastic filtering approach for the same problem using a parsimonious representation of states, and specifically solve the linearized filtering equations through a pseudo-dynamic ensemble Kalman filter (PD-EnKF). For multiple sets of measurements involving various load cases, we speed up the PD-EnKF by introducing an inner iteration within every time step. Results using the pseudo-dynamic strategy obtained through the PD-EnKF and recursive integration are compared with those from the conventional GNM; the comparison shows that the PD-EnKF is the best performer, exhibiting little sensitivity to the process noise covariance and yielding reconstructions with fewer artifacts even when the ensemble size is small.
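
A minimal sketch of the ensemble update at the heart of such a filter is given below, assuming a generic static forward model: one pseudo-time EnKF step that nudges an ensemble of stiffness parameters toward the measured static response. The names (forward_model, meas_cov, etc.) are placeholders, not the paper's actual implementation.

```python
import numpy as np

def enkf_update(theta_ens, y_obs, forward_model, meas_cov):
    """One pseudo-time EnKF update of a parameter ensemble.

    theta_ens     : (n_ens, n_par) ensemble of stiffness parameters
    y_obs         : (n_meas,) measured static response
    forward_model : maps a parameter vector to predicted measurements (placeholder)
    meas_cov      : (n_meas, n_meas) measurement noise covariance
    """
    n_ens = theta_ens.shape[0]
    # Predicted measurements for every ensemble member
    y_ens = np.array([forward_model(th) for th in theta_ens])

    A = theta_ens - theta_ens.mean(axis=0)      # parameter anomalies
    Y = y_ens - y_ens.mean(axis=0)              # measurement anomalies

    # Sample cross- and measurement-covariances
    P_ty = A.T @ Y / (n_ens - 1)
    P_yy = Y.T @ Y / (n_ens - 1) + meas_cov
    K = P_ty @ np.linalg.inv(P_yy)              # Kalman gain

    # Perturbed observations keep the ensemble spread statistically consistent
    rng = np.random.default_rng(0)
    y_pert = y_obs + rng.multivariate_normal(np.zeros(len(y_obs)), meas_cov, n_ens)
    return theta_ens + (y_pert - y_ens) @ K.T
```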

Relevance: 100.00%

Abstract:

A theory is developed for diffusion-limited charge transfer on a non-fractally rough electrode. Perturbation expressions are obtained for the concentration, the current density and the measured diffusion-limited current for arbitrary one- and two-dimensional surface profiles. A random surface model is employed for the rough electrode/electrolyte interface. In this model the gross geometrical property of an electrochemically active rough surface, the surface structure factor, is related to the average electrode current, current density and concentration. In the short- and long-time regimes, various morphological features of the rough electrode, such as excess area (related to the roughness slope), curvature and correlation length, are related to the (average) current transients. A two-point Padé approximant is used to develop an all-time expression for the average current in terms of partial morphological features of the rough surface. The inverse problem of predicting the surface structure factor from the observed transients is also described. Finally, the effect of surface roughness is studied for a specific surface statistic, namely a Gaussian correlation function. It is shown how surface roughness enhances the overall diffusion-limited charge transfer current.
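
As a hedged illustration of how roughness enters the current transient, the snippet below evaluates the planar Cottrell current and a small-slope estimate of its short-time enhancement for a Gaussian height correlation W(x) = h^2 exp(-x^2/a^2); the rms height, correlation length and transport parameters are assumed values, and the Padé-based interpolation between the short- and long-time limits used in the paper is not reproduced.

```python
import numpy as np

def cottrell(t, n=1, F=96485.0, c0=1.0, D=1e-9):
    """Planar Cottrell current density (A/m^2) for bulk concentration c0 (mol/m^3)
    and diffusion coefficient D (m^2/s)."""
    return n * F * c0 * np.sqrt(D / (np.pi * t))

# Gaussian-correlated roughness of a one-dimensional profile (assumed values)
h, a = 1e-6, 5e-6                    # rms height and correlation length (m)
mean_sq_slope = 2.0 * h**2 / a**2    # <(dh/dx)^2> for W(x) = h^2 exp(-x^2/a^2)
R_star = 1.0 + 0.5 * mean_sq_slope   # small-slope estimate of true/projected area

t = np.logspace(-4, 2, 200)          # time, s
i_planar = cottrell(t)
# Limiting behaviours: at short times the current is enhanced roughly by the
# area ratio R_star; at long times it relaxes toward the planar value.
i_short_time = R_star * i_planar
```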

Relevance: 100.00%

Abstract:

We explore a pseudodynamic form of the quadratic parameter update equation for diffuse optical tomographic reconstruction from noisy data. A few explicit and implicit strategies for obtaining the parameter updates via a semianalytical integration of the pseudodynamic equations are proposed. Despite the ill-posedness of the inverse problem associated with diffuse optical tomography, adoption of the quadratic update scheme combined with pseudotime integration appears not only to accelerate convergence but also to mute sensitivity to the regularization parameters, which include the pseudotime step size for integration. These observations are validated through reconstructions with both numerically generated and experimentally acquired data. (C) 2011 Optical Society of America
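
The pseudo-time idea can be sketched as follows for the simpler, first-order (Gauss-Newton) update; the quadratic update and the semianalytical integration strategies of the paper are not reproduced, and forward, jacobian and the step parameters are placeholders.

```python
import numpy as np

def pseudo_time_gn(x0, forward, jacobian, y_meas, lam=1e-3, dtau=0.5, n_steps=50):
    """Explicit Euler integration of the pseudo-dynamic Gauss-Newton flow
        dx/dtau = (J^T J + lam*I)^{-1} J^T (y_meas - forward(x)).
    forward(x) returns predicted data, jacobian(x) returns J; both are placeholders."""
    x = x0.copy()
    for _ in range(n_steps):
        r = y_meas - forward(x)
        J = jacobian(x)
        H = J.T @ J + lam * np.eye(x.size)
        dx = np.linalg.solve(H, J.T @ r)
        x = x + dtau * dx            # pseudo-time step instead of a full GN step
    return x
```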

Relevance: 100.00%

Abstract:

In this paper, a model for a composite beam with an embedded delamination is developed using the wavelet-based spectral finite element (WSFE) method, particularly for damage detection using wave propagation analysis. The simulated responses are used as surrogate experimental results for the inverse problem of damage detection using wavelet filtering. The WSFE technique is very similar to the fast Fourier transform (FFT) based spectral finite element (FSFE) method, except that it uses a compactly supported Daubechies scaling function approximation in time. Unlike the FSFE formulation with its periodicity assumption, the wavelet-based method allows the imposition of initial values and is thus free from wrap-around problems. This helps in the analysis of finite-length undamped structures, where the FSFE method fails to simulate an accurate response. First, numerical experiments are performed to study the effect of delamination on the wave propagation characteristics. The responses are simulated for different delamination configurations for both broadband and narrowband excitations. Next, the simulated responses are used for damage detection using wavelet analysis.
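
As a hedged illustration of the wavelet-filtering step used for damage detection, the sketch below computes a Morlet-wavelet scalogram (by direct convolution, numpy only) of a toy sensor response consisting of an incident tone burst and a weaker, delayed reflection of the kind a delamination would produce; the signal, frequencies and wavelet parameters are illustrative, not taken from the paper.

```python
import numpy as np

def morlet_scalogram(signal, dt, freqs, w0=6.0):
    """Magnitude of a continuous Morlet-wavelet transform by direct convolution."""
    out = np.zeros((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        s = w0 / (2 * np.pi * f)                       # scale for centre frequency f
        tw = np.arange(-4 * s, 4 * s, dt)
        wavelet = np.exp(1j * w0 * tw / s) * np.exp(-(tw / s) ** 2 / 2) / np.sqrt(s)
        out[i] = np.abs(np.convolve(signal, np.conj(wavelet[::-1]), mode="same"))
    return out

# Toy response: incident tone burst plus a weaker, delayed (damage-like) reflection
dt = 1e-6
t = np.arange(0.0, 2e-3, dt)
burst = np.sin(2 * np.pi * 50e3 * t) * np.exp(-((t - 2e-4) / 5e-5) ** 2)
echo = 0.3 * np.sin(2 * np.pi * 50e3 * t) * np.exp(-((t - 9e-4) / 5e-5) ** 2)
response = burst + echo

scalogram = morlet_scalogram(response, dt, freqs=np.linspace(20e3, 80e3, 30))
# Ridges in `scalogram` near 0.2 ms and 0.9 ms separate the incident wave
# from the reflection that a delamination would introduce.
```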

Relevance: 100.00%

Abstract:

We address a certain inverse problem in ultrasound-modulated optical tomography: the recovery of the amplitude of vibration of scatterers [p(r)] in the ultrasound focal volume in a diffusive object from boundary measurements of the modulation depth (M) of the amplitude autocorrelation of light [phi(r, tau)] traversing through it. Since M depends on the stiffness of the material, this is a precursor to elasticity imaging. The propagation of phi(r, tau) is described by a diffusion equation, from which we have derived a nonlinear perturbation equation connecting p(r) and the refractive index modulation [Delta n(r)] in the region of interest to M measured on the boundary. The nonlinear perturbation equation and its approximate linear counterpart are solved for the recovery of p(r). The numerical results reveal regions of different stiffness, demonstrating that the present method recovers p(r) with reasonable quantitative accuracy and spatial resolution. (C) 2011 Optical Society of America
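
A minimal sketch of the linearized recovery step is given below: the sensitivities of the boundary modulation depth to p(r) in each voxel are stacked into a Jacobian and a Tikhonov-regularized least-squares system is solved for the update. The nonlinear perturbation equation itself is not reproduced, and the array names and regularization value are assumptions.

```python
import numpy as np

def tikhonov_update(J, dM, alpha=1e-2):
    """Solve the linearized perturbation system  J dp ~= dM  for an update to the
    scatterer vibration amplitude p(r), with Tikhonov regularization.

    J  : (n_meas, n_vox) sensitivity of modulation depth to p in each voxel
    dM : (n_meas,) difference between measured and modelled modulation depth
    """
    A = J.T @ J + alpha * np.eye(J.shape[1])
    return np.linalg.solve(A, J.T @ dM)
```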

Relevance: 100.00%

Abstract:

Diffuse optical tomography (DOT) using near-infrared (NIR) light is a promising tool for noninvasive imaging of deep tissue. The technique is capable of quantitative reconstruction of absorption coefficient inhomogeneities in tissue. The motivation for reconstructing the optical property variation, and in particular the absorption coefficient variation, is that it can be used to diagnose different metabolic and disease states of tissue. In DOT, as in any other medical imaging modality, the aim is to produce a reconstruction with good spatial resolution and accuracy from noisy measurements. We study the performance of a phase array system for the detection of optical inhomogeneities in tissue. Light transport through tissue is diffusive in nature and can be modeled using the diffusion equation if the optical parameters of the inhomogeneity are close to the optical properties of the background. The amplitude cancellation method, which uses dual out-of-phase sources (a phase array), can detect and locate small objects in a turbid medium. The inverse problem is solved using model-based iterative image reconstruction. The diffusion equation is solved using the finite element method to provide the forward model for photon transport. The solution of the forward problem is used to compute the Jacobian, and the resulting system of equations is solved using a conjugate gradient search. Simulation studies have been carried out, and the results show that a phase array system can resolve inhomogeneities with sizes of 5 mm when the absorption coefficient of the inhomogeneity is twice that of the background tissue. To validate this result, a prototype dual-source system has been developed. Experiments are carried out by inserting an inhomogeneity of high optical absorption coefficient into an otherwise homogeneous phantom while keeping the scattering coefficient the same. Light from the high-frequency (100 MHz) modulated, out-of-phase dual laser sources is propagated through the phantom. The interference of these sources creates an amplitude null and a phase shift of 180° along a plane between the two sources for a homogeneous object. A solid resin phantom with inhomogeneities simulating a tumor is used in our experiment. The amplitude and phase distributions are found to be disturbed by the presence of the inhomogeneity in the object. The experimental data (amplitude and phase measured at the detector) are used for reconstruction. The results show that the method is able to detect multiple inhomogeneities with sizes of 4 mm. The localization error for a 5 mm inhomogeneity is found to be approximately 1 mm.
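
The amplitude-cancellation idea can be sketched with the standard infinite-medium diffusion Green's function for a modulated point source: superposing two sources driven 180° out of phase produces an amplitude null and a 180° phase jump on the mid-plane, which an absorbing inclusion would shift. The optical properties, geometry and frequency below are illustrative, not the experimental configuration.

```python
import numpy as np

# Illustrative optical properties (mm^-1) and 100 MHz modulation
mua, musp, n_ref = 0.01, 1.0, 1.4
v = 3e11 / n_ref                          # speed of light in tissue, mm/s
D = 1.0 / (3.0 * (mua + musp))            # diffusion coefficient, mm
omega = 2 * np.pi * 100e6
k = np.sqrt((mua - 1j * omega / v) / D)   # complex diffuse photon-density wave number

def fluence(r):
    """Infinite-medium photon-density wave of a point source at distance r (mm)."""
    return np.exp(-k * r) / (4 * np.pi * D * r)

# Two sources 20 mm apart, driven 180 degrees out of phase; detectors on a line
src1, src2 = np.array([-10.0, 0.0]), np.array([10.0, 0.0])
x = np.linspace(-15.0, 15.0, 301)
points = np.stack([x, np.full_like(x, 20.0)], axis=1)   # detector line at y = 20 mm

phi = (fluence(np.linalg.norm(points - src1, axis=1))
       - fluence(np.linalg.norm(points - src2, axis=1)))  # minus sign: out of phase

amplitude, phase = np.abs(phi), np.angle(phi, deg=True)
# For a homogeneous medium the amplitude dips to ~0 at x = 0 and the phase jumps
# by ~180 degrees there; an absorbing inclusion shifts this null and phase step.
```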

Relevance: 100.00%

Abstract:

Purpose: The authors aim at developing a pseudo-time, sub-optimal stochastic filtering approach based on a derivative-free variant of the ensemble Kalman filter (EnKF) for solving the inverse problem of diffuse optical tomography (DOT), while making use of a shape-based reconstruction strategy that enables representing a cross section of an inhomogeneous tumor boundary by a general closed curve. Methods: The optical parameter fields to be recovered are approximated via an expansion based on circular harmonics (CH) (Fourier basis functions), and the EnKF is used to recover the coefficients in the expansion with both simulated and experimentally obtained photon fluence data on phantoms with inhomogeneous inclusions. The process and measurement equations in the pseudo-dynamic EnKF (PD-EnKF) presently yield a parsimonious representation of the filter variables, which consist of only the Fourier coefficients and the constant scalar parameter value within the inclusion. Using fictitious, low-intensity Wiener noise processes in suitably constructed "measurement" equations, the filter variables are treated as pseudo-stochastic processes so that their recovery within a stochastic filtering framework is made possible. Results: In our numerical simulations, we have considered both elliptical inclusions (two inhomogeneities) and those with more complex shapes (such as an annular ring and a dumbbell) in 2-D objects which are cross sections of a cylinder, with the background absorption and (reduced) scattering coefficients chosen as mu_a = 0.01 mm^-1 and mu_s' = 1.0 mm^-1, respectively. We also assume mu_a = 0.02 mm^-1 within the inhomogeneity (for the single-inhomogeneity case) and mu_a = 0.02 and 0.03 mm^-1 (for the two-inhomogeneity case). The reconstruction results obtained with the PD-EnKF are shown to be consistently superior to those from a deterministic and explicitly regularized Gauss-Newton algorithm. We have also estimated the unknown mu_a from experimentally gathered fluence data and verified the reconstruction by matching the experimental data with the computed data. Conclusions: The PD-EnKF, which exhibits little sensitivity to variations in the fictitiously introduced noise processes, is also shown to be accurate and robust in recovering a spatial map of the absorption coefficient from DOT data. With the help of the shape-based representation of the inhomogeneities and an appropriate scaling of the CH expansion coefficients representing the boundary, we have been able to recover inhomogeneities representative of the shape of malignancies in medical diagnostic imaging. (C) 2012 American Association of Physicists in Medicine. [DOI: 10.1118/1.3679855]
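
A minimal sketch of the shape parameterization is given below: the inclusion boundary is a closed curve whose radius is a truncated circular-harmonic (Fourier) expansion, which is exactly the kind of low-dimensional state a filter can update. The coefficient values and the helper function name are illustrative.

```python
import numpy as np

def boundary_from_ch(coeffs, centre=(0.0, 0.0), n_pts=200):
    """Closed curve r(theta) = a0 + sum_k [a_k cos(k theta) + b_k sin(k theta)].
    coeffs = [a0, a1, b1, a2, b2, ...]; all values here are illustrative."""
    theta = np.linspace(0.0, 2 * np.pi, n_pts)
    r = np.full_like(theta, coeffs[0])
    for k in range(1, (len(coeffs) + 1) // 2):
        r += coeffs[2 * k - 1] * np.cos(k * theta)
        r += coeffs[2 * k] * np.sin(k * theta)
    x = centre[0] + r * np.cos(theta)
    y = centre[1] + r * np.sin(theta)
    return x, y

# A mildly perturbed elliptical inclusion described by three harmonics
x, y = boundary_from_ch([6.0, 1.5, 0.0, 0.8, 0.0, 0.0, 0.3])
```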

Relevance: 100.00%

Abstract:

Diffusion equation-based modeling of near-infrared light propagation in tissue is typically performed on a finite-element mesh when imaging real tissue types, such as breast and brain. The finite-element mesh size (number of nodes) dictates the parameter space in optical tomographic imaging. Most commonly used finite-element meshing algorithms do not provide the flexibility of distinct nodal spacing in different regions of the imaging domain to take the sensitivity of the problem into consideration. This study presents a computationally efficient mesh simplification method that can be used as a preprocessing step to iterative image reconstruction, in which the finite-element mesh is simplified by an edge-collapsing algorithm to reduce the parameter space in regions where the sensitivity of the problem is relatively low. It is shown, using simulations and experimental phantom data for simple meshes/domains, that a significant reduction in parameter space can be achieved without compromising the reconstructed image quality. The maximum errors observed using the simplified meshes were less than 0.27% in the forward problem and 5% in the inverse problem.
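
A minimal sketch of the basic edge-collapse operation is given below, assuming a simple triangle mesh stored as node coordinates and connectivity; the sensitivity-based selection of which edges to collapse and the mesh-quality safeguards used in the study are omitted.

```python
import numpy as np

def collapse_edge(nodes, tris, i, j):
    """Collapse edge (i, j): move node i to the edge midpoint, redirect all
    references to j onto i, and drop triangles that become degenerate.
    The now-unused node j is left in place for simplicity.

    nodes : (n_nodes, 2) coordinates, tris : (n_tris, 3) node indices."""
    nodes = nodes.copy()
    nodes[i] = 0.5 * (nodes[i] + nodes[j])
    tris = np.where(tris == j, i, tris)                 # j is absorbed into i
    keep = ~((tris[:, 0] == tris[:, 1]) |
             (tris[:, 1] == tris[:, 2]) |
             (tris[:, 0] == tris[:, 2]))                # remove collapsed triangles
    return nodes, tris[keep]

# Tiny example: a square split into two triangles; collapse edge (1, 2)
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
tris = np.array([[0, 1, 2], [0, 2, 3]])
new_nodes, new_tris = collapse_edge(nodes, tris, 1, 2)
# One triangle survives; node 1 now sits at the old edge midpoint (1.0, 0.5).
```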

Relevance: 100.00%

Abstract:

Traditional image reconstruction methods in rapid dynamic diffuse optical tomography employ l2-norm-based regularization, which is known to remove high-frequency components in the reconstructed images and make them appear smooth. The contrast recovery in these types of methods typically depends on the iterative nature of the method employed: nonlinear iterative techniques are known to perform better than linear (noniterative) techniques, with the caveat that they are computationally complex. Assuming a linear dependence of the solution between successive frames results in a linear inverse problem. Combined with l1-norm-based regularization, this new framework can provide better robustness to noise and better contrast recovery compared to conventional l2-based techniques. Moreover, it is shown that the proposed l1-based technique is computationally efficient compared to its l2-based counterpart. The proposed framework requires a reasonably close estimate of the actual solution for the initial frame, and any suboptimal estimate leads to erroneous reconstruction results for the subsequent frames.
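
A hedged sketch of an l1-regularized solve for the linear per-frame problem is given below, using iterative soft thresholding (ISTA) warm-started from the previous frame's estimate; the paper's exact regularization and frame-coupling details may differ, and all names are placeholders.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(J, dy, x0, lam=1e-3, n_iter=200):
    """Minimise 0.5*||J x - dy||^2 + lam*||x||_1, starting from the previous
    frame's estimate x0 (the close initial estimate the abstract requires)."""
    L = np.linalg.norm(J, 2) ** 2          # Lipschitz constant of the data-fit gradient
    x = x0.copy()
    for _ in range(n_iter):
        grad = J.T @ (J @ x - dy)
        x = soft_threshold(x - grad / L, lam / L)
    return x
```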

Relevance: 100.00%

Abstract:

The inverse problem in photoacoustic tomography (PAT) seeks to obtain the absorbed energy map from boundary pressure measurements, for which computationally intensive iterative algorithms exist. The computational challenge is heightened when the reconstruction is done using boundary data split into its frequency spectrum to improve source localization and the conditioning of the inverse problem. The key idea of this work is to modify the update equation so that the Jacobian contributions and the data perturbations are summed over all wave numbers, k, and the resulting system is inverted only once to recover the absorbed energy map. This leads to a considerable reduction in the overall computation time. The results obtained using simulated data demonstrate the efficiency of the proposed scheme without compromising the accuracy of the reconstruction.
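
The modified update can be sketched as follows: the normal-equation contributions are accumulated over all wave numbers and the summed system is inverted once per update. The function name, regularization and data layout are assumptions.

```python
import numpy as np

def summed_update(jacobians, residuals, lam=1e-2):
    """Single-inversion update over all wave numbers k:
        dx = (sum_k J_k^T J_k + lam*I)^{-1} sum_k J_k^T dp_k

    jacobians : list of (n_meas_k, n_vox) Jacobians, one per wave number
    residuals : list of (n_meas_k,) data perturbations, one per wave number"""
    n_vox = jacobians[0].shape[1]
    H = lam * np.eye(n_vox)
    g = np.zeros(n_vox)
    for J_k, dp_k in zip(jacobians, residuals):
        H += J_k.T @ J_k
        g += J_k.T @ dp_k
    return np.linalg.solve(H, g)       # the system is inverted only once per update
```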

Relevance: 100.00%

Abstract:

In this paper, the free vibration of a non-uniform free-free Euler-Bernoulli beam is studied using an inverse problem approach. It is found that the fourth-order governing differential equation for such beams possesses a fundamental closed-form solution for certain polynomial variations of the mass and stiffness. An infinite number of non-uniform free-free beams exist, with different mass and stiffness variations, but sharing the same fundamental frequency. A detailed study is conducted for linear, quadratic and cubic variations of mass, and on how to pre-select the internal nodes such that the closed-form solutions exist for the three cases. A special case is also considered where external elastic constraints are present at the internal nodes. The derived results are provided as benchmark solutions for the validation of numerical codes for non-uniform free-free beams. (C) 2013 Elsevier Ltd. All rights reserved.
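
For reference, the fourth-order eigenvalue problem referred to in the abstract has the standard Euler-Bernoulli form below (written from general beam theory, not from the paper's derivation), where W(x) is the mode shape, EI(x) the flexural stiffness and m(x) the mass per unit length:

```latex
\frac{d^{2}}{dx^{2}}\!\left[\,EI(x)\,\frac{d^{2}W(x)}{dx^{2}}\right]
  = \omega^{2}\, m(x)\, W(x), \qquad 0 \le x \le L,
```

with free-free boundary conditions of zero bending moment and zero shear, EI W'' = 0 and (EI W'')' = 0, at x = 0 and x = L.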

Relevance: 100.00%

Abstract:

Synfire waves are propagating spike packets in synfire chains, which are feedforward chains embedded in random networks. Although synfire waves have proved to be an effective quantification of network activity with clear relations to network structure, their utility is largely limited to feedforward networks with low background activity. To overcome these shortcomings, we describe a novel generalisation of synfire waves and define a 'synconset wave' as a cascade of first spikes within a synchronisation event. Synconset waves occur in 'synconset chains', which are feedforward chains embedded in possibly heavily recurrent networks with heavy background activity. We probed the utility of synconset waves using simulations of single-compartment neuron network models with biophysically realistic conductances, and demonstrated that the spread of synconset waves directly follows from the network connectivity matrix and is modulated by top-down inputs and the resultant oscillations. Such synconset profiles lend intuitive insights into network organisation in terms of connection probabilities between various network regions rather than an adjacency matrix. To test this intuition, we developed a Bayesian likelihood function that quantifies the probability that an observed synfire wave was caused by a given network. Further, we demonstrated its utility in the inverse problem of identifying the network that caused a given synfire wave. This method was effective even in highly subsampled networks, where only a small subset of neurons was accessible, thus showing its utility in the experimental estimation of connectomes in real neuronal networks. Together, we propose synconset chains/waves as an effective framework for understanding the impact of network structure on function, and as a step towards developing physiology-driven network identification methods. Finally, as synconset chains extend the utility of synfire chains to arbitrary networks, we suggest applications of our framework to several aspects of network physiology, including cell assemblies, population codes, and oscillatory synchrony.
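
A hedged sketch of how a synconset wave can be read off a spike raster is given below: within a detected synchronisation window, each neuron contributes its first spike and neurons are ordered by those first-spike times. The criterion for detecting the synchronisation events themselves, and the Bayesian likelihood, are not reproduced here.

```python
import numpy as np

def synconset_wave(spike_times, window):
    """First-spike cascade within one synchronisation event.

    spike_times : dict  neuron_id -> sorted array of spike times (s)
    window      : (t_start, t_end) of the synchronisation event
    Returns neuron ids ordered by their first spike inside the window."""
    t0, t1 = window
    firsts = {}
    for nid, times in spike_times.items():
        in_win = times[(times >= t0) & (times < t1)]
        if in_win.size:
            firsts[nid] = in_win[0]
    return sorted(firsts, key=firsts.get)

# Toy raster: neuron 2 fires first, then 0, then 1 within the event
raster = {0: np.array([0.012, 0.050]), 1: np.array([0.018]), 2: np.array([0.010])}
print(synconset_wave(raster, (0.005, 0.030)))   # -> [2, 0, 1]
```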

Relevance: 100.00%

Abstract:

Sparse estimation methods that utilize the lp-norm, with p between 0 and 1, have shown better utility in providing optimal solutions to the inverse problem in diffuse optical tomography. These lp-norm-based regularizations make the optimization function nonconvex, and algorithms that implement lp-norm minimization utilize approximations to the original lp-norm function. In this work, three such typical methods for implementing the lp-norm were considered, namely, iteratively reweighted l1-minimization (IRL1), iteratively reweighted least squares (IRLS), and the iterative thresholding method (ITM). These methods were deployed for performing diffuse optical tomographic image reconstruction, and a systematic comparison was carried out using three numerical and gelatin phantom cases. The results indicate that these three implementations of lp-minimization yield similar results, with IRL1 faring marginally better in the cases considered here in terms of shape recovery and quantitative accuracy of the reconstructed diffuse optical tomographic images. (C) 2014 Optical Society of America
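
A minimal sketch of one of the three schemes, IRLS for 0 < p < 1, is given below: each outer iteration replaces the lp penalty by a weighted l2 penalty built from the current estimate and solves the resulting regularized least-squares system. The smoothing constant, regularization weight and iteration count are illustrative.

```python
import numpy as np

def irls_lp(J, y, p=0.5, lam=1e-3, eps=1e-6, n_outer=20):
    """Approximate minimisation of ||J x - y||^2 + lam * ||x||_p^p (0 < p < 1)
    by iteratively reweighted least squares."""
    x = np.linalg.lstsq(J, y, rcond=None)[0]        # least-squares starting point
    for _ in range(n_outer):
        # Reweighting: sum_i w_i x_i^2 with these weights approximates sum_i |x_i|^p
        w = (x ** 2 + eps) ** (p / 2.0 - 1.0)
        A = J.T @ J + lam * np.diag(w)
        x = np.linalg.solve(A, J.T @ y)
    return x
```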