880 results for Fourier-space Weighting


Relevance:

100.00%

Publisher:

Abstract:

In this work, we present a neural network (NN) based method for the 3D rigid-body registration of FMRI time series which relies on a limited number of Fourier coefficients of the images to be aligned. These coefficients, taken from a small cubic neighborhood in the first octant of the 3D Fourier space (including the DC component), are fed into six NNs during the learning stage. Each NN yields the estimate of one registration parameter. The proposed method was assessed for 3D rigid-body transformations, using DC neighborhoods of different sizes. The mean absolute registration errors are approximately 0.030 mm in translation and 0.030 deg in rotation for the typical motion amplitudes encountered in FMRI studies. The construction of the training set and the learning stage are fast, requiring 90 s and 1 to 12 s, respectively, depending on the number of input and hidden units of the NN. We believe that NN-based approaches to the problem of FMRI registration can be of great interest in the future. For instance, NNs relying on limited k-space data (possibly from navigator echoes) could be a valid solution to the problem of prospective (in-frame) FMRI registration.
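As a rough illustration of the feature-extraction idea described above, the sketch below gathers the low-frequency Fourier coefficients of a volume and sets up one small regressor per rigid-body parameter. The neighborhood size, network dimensions, and training details are placeholders, not the values used in the paper:

```python
# Hypothetical sketch of Fourier-feature extraction for NN-based rigid-body
# registration; network sizes and training details are illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor

def fourier_features(volume, n=5):
    """Real/imag parts of the n x n x n corner of the 3D DFT
    (first octant, DC component included) as a flat feature vector."""
    F = np.fft.fftn(volume)[:n, :n, :n]
    return np.concatenate([F.real.ravel(), F.imag.ravel()])

# One small regressor per rigid-body parameter (3 translations, 3 rotations),
# mirroring the abstract's use of six separate networks.
nets = [MLPRegressor(hidden_layer_sizes=(12,), max_iter=2000) for _ in range(6)]

# Training would pair features of transformed volumes with known parameters:
#   X = np.stack([fourier_features(v) for v in training_volumes])
#   for i, net in enumerate(nets):
#       net.fit(X, params[:, i])   # params: (n_samples, 6) known transforms
```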

Relevance:

100.00%

Publisher:

Abstract:

The use of different kinds of nonlinear filtering in a joint transform correlator is studied and compared. The study is divided into two parts, one corresponding to the object space and the other to the Fourier domain of the joint power spectrum. In the first part, phase and inverse filters are computed; their inverse Fourier transforms are also computed, thereby becoming the references in the object space. In Fourier space, binarization of the joint power spectrum is performed and compared with a new procedure for removing the spatial envelope. All cases are simulated and experimentally implemented with a compact joint transform correlator.
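A minimal numerical sketch of the basic joint transform correlator operation, including a hard-clipping binarization of the joint power spectrum; the image sizes and the median threshold are arbitrary choices for illustration:

```python
# Toy joint transform correlator with optional binarised joint power spectrum.
import numpy as np

def jtc(reference, target, binarise=False):
    """Place reference and target side by side, form the joint power
    spectrum, optionally binarise it, and return the correlation plane."""
    h, w = reference.shape
    scene = np.zeros((h, 3 * w))
    scene[:, :w] = reference            # reference in object space
    scene[:, 2 * w:] = target           # target, laterally separated
    jps = np.abs(np.fft.fft2(scene)) ** 2
    if binarise:                        # hard-clipping nonlinearity
        jps = np.where(jps > np.median(jps), 1.0, -1.0)
    # Second transform yields the correlation plane; the cross-correlation
    # peaks appear displaced by the reference-target separation.
    return np.abs(np.fft.fft2(jps))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
out = jtc(ref, ref, binarise=True)      # autocorrelation test
```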

Relevance:

100.00%

Publisher:

Abstract:

Subtractive imaging in confocal fluorescence light microscopy is based on the subtraction of a suitably weighted widefield image from a confocal image. An approximation to a widefield image can be obtained by detection with an opened confocal pinhole. The subtraction of images enhances the resolution in-plane as well as along the optic axis. Due to the linearity of the approach, the effect of subtractive imaging in Fourier space corresponds to a reduction of low-spatial-frequency contributions, leading to a relative enhancement of the high frequencies. Along the direction of the optic axis this also results in improved sectioning. Image processing can achieve a similar effect; however, a 3D volume dataset must then be acquired and processed, yielding a result essentially identical to subtractive imaging but superior in signal-to-noise ratio. The latter can be increased further with the technique of weighted averaging in Fourier space. A comparison of 2D and 3D experimental data analysed with subtractive imaging, the equivalent Fourier-space processing of the confocal data only, and Fourier-space weighted averaging is presented. © 2003 Elsevier Ltd. All rights reserved.
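A toy illustration of the linearity argument above, with Gaussian blurs standing in for the confocal and open-pinhole point spread functions and an ad hoc subtraction weight:

```python
# Subtracting a weighted "open pinhole" image suppresses low spatial
# frequencies relative to high ones; all parameters here are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
obj = rng.random((128, 128))
confocal = gaussian_filter(obj, sigma=1.0)    # narrower PSF (stand-in)
widefield = gaussian_filter(obj, sigma=2.0)   # broader PSF (open pinhole)

gamma = 0.5                                   # subtraction weight (ad hoc)
subtractive = confocal - gamma * widefield

# Since the operation is linear, the effective transfer function is
# OTF_conf(k) - gamma * OTF_wide(k), which is smallest at low |k|:
ratio = np.abs(np.fft.fft2(subtractive)) / (np.abs(np.fft.fft2(confocal)) + 1e-12)
```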

Relevance:

90.00%

Publisher:

Abstract:

This paper makes two points. First, we show that the line-of-sight solution to cosmic microwave anisotropies in Fourier space, even though formally defined for arbitrarily large wavelengths, leads to position-space solutions which only depend on the sources of anisotropies inside the past light cone of the observer. This foretold manifestation of causality in position (real) space happens order by order in a series expansion in powers of the visibility γ = e^(−μ), where μ is the optical depth to Thomson scattering. We show that the contributions of order γ^N to the cosmic microwave background (CMB) anisotropies are regulated by spacetime window functions which have support only inside the past light cone of the point of observation. Second, we show that the Fourier-Bessel expansion of the physical fields (including the temperature and polarization momenta) is an alternative to the usual Fourier basis as a framework to compute the anisotropies. The viability of the Fourier-Bessel series for treating the CMB is a consequence of the fact that the visibility function becomes exponentially small at redshifts z ≫ 10³, effectively cutting off the past light cone and introducing a finite radius inside which initial conditions can affect physical observables measured at our position x⃗ = 0 and time t₀. Hence, for each multipole l there is a discrete tower of momenta k_il (not a continuum) which can affect physical observables, with the smallest momentum being k_1l ∼ l. The Fourier-Bessel modes take into account precisely the information from the sources of anisotropies that propagates from the initial value surface to the point of observation, no more, no less. We also show that the physical observables (the temperature and polarization maps), and hence the angular power spectra, are unaffected by that choice of basis. This implies that the Fourier-Bessel expansion is the optimal scheme with which one can compute CMB anisotropies.
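The discrete tower of momenta comes from the zeros of the spherical Bessel functions on a finite radius. A hedged sketch of that construction (the radius R and the multipole below are arbitrary illustrative values):

```python
# Discrete Fourier-Bessel momenta k_{il} = x_{il} / R, where x_{il} are the
# zeros of the spherical Bessel function j_l, located by sign changes.
import numpy as np
from scipy.special import spherical_jn
from scipy.optimize import brentq

def spherical_jl_zeros(l, n_zeros, x_max=100.0, dx=0.05):
    """First n_zeros positive zeros of j_l."""
    xs = np.arange(dx, x_max, dx)
    vals = spherical_jn(l, xs)
    idx = np.where(vals[:-1] * vals[1:] < 0)[0][:n_zeros]
    return np.array([brentq(lambda x: spherical_jn(l, x), xs[i], xs[i + 1])
                     for i in idx])

R = 1.0                                # cutoff radius -- illustrative
l = 10
k_il = spherical_jl_zeros(l, 5) / R    # discrete momenta; smallest ~ l / R
```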

Relevance:

90.00%

Publisher:

Abstract:

We present an open-source ITK implementation of a direct Fourier method for tomographic reconstruction, applicable to parallel-beam x-ray images. Direct Fourier reconstruction makes use of the central-slice theorem to build a polar 2D Fourier space from the 1D transformed projections of the scanned object, which is then resampled into a Cartesian grid. An inverse 2D Fourier transform eventually yields the reconstructed image. Additionally, we provide a complex wrapper to the BSplineInterpolateImageFunction to overcome ITK's current lack of image interpolators dealing with complex data types. A sample application is presented and extensively illustrated on the Shepp-Logan head phantom. We show that appropriate input zero-padding and 2D-DFT oversampling rates, together with radial cubic B-spline interpolation, improve 2D-DFT interpolation quality and are efficient remedies to reduce reconstruction artifacts.
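A compact numpy/scipy sketch of the central-slice reconstruction pipeline just described, using linear resampling in place of the radial cubic B-spline interpolation and omitting the zero-padding and oversampling refinements:

```python
# Direct Fourier reconstruction from parallel-beam projections via the
# central-slice theorem (simplified: linear interpolation, no oversampling).
import numpy as np
from scipy.interpolate import griddata

def direct_fourier_reconstruction(sinogram, thetas):
    """sinogram: (n_angles, n_det) parallel projections; thetas in radians."""
    n = sinogram.shape[1]
    freqs = np.fft.fftshift(np.fft.fftfreq(n))
    # Central-slice theorem: each 1D FFT is a radial line of the 2D FFT.
    slices = np.fft.fftshift(
        np.fft.fft(np.fft.ifftshift(sinogram, axes=1), axis=1), axes=1)
    kx = freqs[None, :] * np.cos(thetas)[:, None]
    ky = freqs[None, :] * np.sin(thetas)[:, None]
    gx, gy = np.meshgrid(freqs, freqs)
    pts = (kx.ravel(), ky.ravel())
    # Resample polar samples onto a Cartesian grid (real/imag separately).
    Fr = griddata(pts, slices.real.ravel(), (gx, gy), method='linear', fill_value=0.0)
    Fi = griddata(pts, slices.imag.ravel(), (gx, gy), method='linear', fill_value=0.0)
    img = np.fft.ifft2(np.fft.ifftshift(Fr + 1j * Fi))
    return np.real(np.fft.fftshift(img))
```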

Relevance:

90.00%

Publisher:

Abstract:

The aim of this thesis was, on the one hand, to further extend the applicability of the Fourier-transform (FT) rheology technique, especially for the non-linear mechanical characterisation of polymeric materials, and on the other hand to investigate the influence of the degree of branching on the linear and non-linear relaxation behaviour of polymeric materials. The latter was achieved by applying, in particular, FT-rheology, as well as other rheological techniques, to variously branched polymer melts and solutions. For these purposes, narrowly distributed linear and star-shaped polystyrene and polybutadiene homopolymers with varying molecular weights were synthesised anionically using both high-vacuum and inert-atmosphere techniques. Furthermore, differently entangled solutions of linear and star-shaped polystyrenes in di-sec-octyl phthalate (DOP) were prepared. The linear polystyrene solutions were measured under large-amplitude oscillatory shear (LAOS) conditions and the non-linear torque response was analysed in Fourier space. Experimental results were compared with numerical predictions performed by Dr. B. Debbaut using a multi-mode differential viscoelastic fluid model obeying the Giesekus constitutive equation. Apart from the analysis of the relative intensities of the harmonics, a detailed examination of the information content of the phase was developed. Furthermore, FT-rheology made it possible to distinguish polystyrene melts and solutions with different topologies where other rheological measurements failed. Significant differences occurred under LAOS conditions, particularly reflected in the phase of the third harmonic, Φ3, which could be related to shear-thinning and shear-thickening behaviour.
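The core FT-rheology measurement reduces to a Fourier analysis of the torque signal. A minimal sketch with a synthetic weakly non-linear stress response (the signal parameters are invented for the example), extracting the relative intensity I3/I1 and the relative phase of the third harmonic:

```python
# Fourier analysis of a synthetic oscillatory-shear stress response.
import numpy as np

f0, fs, T = 1.0, 256.0, 32.0     # excitation freq, sampling rate, duration
t = np.arange(0, T, 1.0 / fs)
# Synthetic odd-harmonic response of a weakly nonlinear medium:
stress = np.sin(2 * np.pi * f0 * t) + 0.05 * np.sin(3 * 2 * np.pi * f0 * t + 0.3)

spec = np.fft.rfft(stress)
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
i1 = np.argmin(np.abs(freqs - f0))
i3 = np.argmin(np.abs(freqs - 3 * f0))

I3_over_I1 = np.abs(spec[i3]) / np.abs(spec[i1])    # relative intensity
phi3 = np.angle(spec[i3]) - 3 * np.angle(spec[i1])  # relative third-harmonic phase
```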

Relevance:

80.00%

Publisher:

Abstract:

We develop an algorithm to simulate a Gaussian stochastic process that is non-δ-correlated in both its space and time coordinates. The colored noise obeys a linear reaction-diffusion Langevin equation driven by Gaussian white noise. This equation is simulated exactly in a discrete Fourier space.
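In Fourier space a linear reaction-diffusion Langevin equation decouples into independent Ornstein-Uhlenbeck modes, each of which admits an exact finite-time-step update. A 1D sketch with assumed parameter values (taking the real part at the end as a shortcut rather than enforcing Hermitian symmetry):

```python
# Exact per-mode simulation of a linear reaction-diffusion Langevin equation
# in discrete Fourier space; all parameters are illustrative.
import numpy as np

N, L = 256, 2 * np.pi
tau, D, eps, dt = 1.0, 0.5, 1.0, 0.1        # relaxation, diffusion, noise, step
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
lam = 1.0 / tau + D * k**2                  # per-mode relaxation rate

rng = np.random.default_rng(2)
theta_k = np.zeros(N, dtype=complex)

def step(theta_k):
    """Exact Ornstein-Uhlenbeck update of every Fourier mode over dt."""
    decay = np.exp(-lam * dt)
    var = eps * (1.0 - decay**2) / (2.0 * lam)
    noise = np.sqrt(var / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    return theta_k * decay + noise

for _ in range(1000):
    theta_k = step(theta_k)
field = np.real(np.fft.ifft(theta_k))       # colored noise in real space
```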

Relevance:

80.00%

Publisher:

Abstract:

Computed tomography (CT) is an imaging technique in which interest has grown rapidly since it began to be used in the 1970s. Today it has become an extensively used modality because of its ability to produce accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionising radiation on the population. Among those negative effects, one of the major remaining risks is the development of cancers associated with exposure to diagnostic X-ray procedures. In order to ensure that the benefit-risk ratio remains in favour of the patient, it is necessary to make sure that the delivered dose leads to the proper diagnosis while avoiding unnecessarily high-quality images. This optimisation scheme is already an important concern for adult patients, but it must become an even greater priority when examinations are performed on children or young adults, in particular in follow-up studies requiring several CT procedures over the patient's life. Children and young adults are indeed more sensitive to radiation because of their faster metabolism, and harmful consequences are more likely to occur because of their longer life expectancy. The recent introduction of iterative reconstruction algorithms, designed to substantially reduce dose, is certainly a major achievement in the evolution of CT, but it has also created difficulties in assessing the quality of the images produced with those algorithms. The goal of the present work was to propose a strategy for investigating the potential of iterative reconstructions to reduce dose without compromising the ability to answer the diagnostic question. The major difficulty lies in having a clinically relevant way to estimate image quality. To ensure the choice of pertinent image-quality criteria, this work was performed in close collaboration with radiologists. The work began by tackling the characterisation of image quality in musculo-skeletal examinations.
We focused, in particular, on the behaviour of image noise and spatial resolution when iterative image reconstruction was used. The analyses of these physical parameters allowed radiologists to adapt their acquisition and reconstruction protocols while knowing what loss of image quality to expect. This work also dealt with the loss of low-contrast detectability associated with dose reduction, a major concern in abdominal investigations. Knowing that alternatives to classical Fourier-space metrics had to be used to assess image quality, we focused on the use of mathematical model observers. Our experimental parameters determined the type of model to use. Ideal model observers were applied to characterise image quality when purely objective results about signal detectability were sought, whereas anthropomorphic model observers were used in a more clinical context, when the results had to be compared with the eye of a radiologist, taking advantage of their incorporation of elements of the human visual system. This work confirmed that the use of model observers makes it possible to assess image quality using a task-based approach, which, in turn, establishes a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstructions have the potential to reduce the delivered dose without impairing the quality of the diagnosis. Among the different types of iterative reconstruction, model-based ones offer the greatest potential, since images produced with this modality can still lead to an accurate diagnosis even when acquired at very low dose. This work has also clarified the role of medical physicists in CT imaging: standard metrics remain important for assessing unit compliance with legal requirements, but model observers are the way to go when optimising imaging protocols.
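As a concrete example of the task-based approach, here is a minimal sketch of a (channelized) Hotelling model observer computing a detectability index d' from signal-present and signal-absent samples; the channel outputs below are synthetic placeholders, not data from the thesis:

```python
# Hotelling model observer on channel outputs: d'^2 = dmu^T K^-1 dmu.
import numpy as np

def hotelling_dprime(present, absent):
    """present/absent: (n_samples, n_channels) channel outputs for
    signal-present and signal-absent images."""
    d_mu = present.mean(axis=0) - absent.mean(axis=0)
    K = 0.5 * (np.cov(present, rowvar=False) + np.cov(absent, rowvar=False))
    w = np.linalg.solve(K, d_mu)            # Hotelling template
    return float(np.sqrt(d_mu @ w))

# Synthetic channel outputs: unit-variance noise, constant signal shift.
rng = np.random.default_rng(3)
absent = rng.normal(size=(200, 8))
present = rng.normal(size=(200, 8)) + 0.4
print(hotelling_dprime(present, absent))    # expected d' ~ 0.4 * sqrt(8)
```

In practice a channel bank (e.g. frequency-selective filters) first reduces each image to a handful of scalar responses before this step.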

Relevance:

80.00%

Publisher:

Abstract:

Four problems of physical interest have been solved in this thesis using the path-integral formalism. Using the trigonometric expansion method of Burton and de Borde (1955), we found the kernel for two interacting one-dimensional oscillators. The result is the same as one would obtain using a normal-coordinate transformation. We next introduced the method of Papadopoulos (1969), a systematic perturbation-type method specifically geared to finding the partition function Z, or equivalently the Helmholtz free energy F, of a system of interacting oscillators. We applied this method to the next three problems considered. First, by summing the perturbation expansion, we found F for a system of N interacting Einstein oscillators. The result obtained is the same as the usual result of Shukla and Muller (1972). Next, we found F to O(λ²), where λ is the usual Van Hove ordering parameter. The results obtained are the same as those of Shukla and Cowley (1971), who used a diagrammatic procedure and performed the necessary sums in Fourier space; we performed the work in temperature space. Finally, slightly modifying the method of Papadopoulos, we found the finite-temperature expressions for the Debye-Waller factor in Bravais lattices, to O(λ²) and O(|K|⁴), where K is the scattering vector. The high-temperature limits of the expressions obtained here are in complete agreement with the classical results of Maradudin and Flinn (1963).
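For orientation, the non-interacting building block underlying these perturbative free energies is the single harmonic (Einstein) oscillator, whose free energy has the standard closed form below; this is a textbook reference point, not the interacting result of the thesis:

```latex
% Free energy of one harmonic oscillator of frequency \omega at temperature T;
% the interacting results quoted above are perturbative corrections around
% sums of such terms.
F_0 = k_B T \,\ln\!\left[\, 2\sinh\!\left(\frac{\hbar\omega}{2 k_B T}\right) \right]
```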

Relevance:

80.00%

Publisher:

Abstract:

We study the degree to which Kraichnan–Leith–Batchelor (KLB) phenomenology describes two-dimensional energy cascades in α turbulence, governed by ∂θ/∂t + J(ψ,θ) = ν∇²θ + f, where θ = (−Δ)^(α/2) ψ is the generalized vorticity and ψ̂(k) = k^(−α) θ̂(k) in Fourier space. These models differ in spectral non-locality, and include surface quasigeostrophic flow (α=1), regular two-dimensional flow (α=2) and rotating shallow flow (α=3), which is the isotropic limit of a mantle convection model. We re-examine arguments for dual inverse energy and direct enstrophy cascades, including Fjørtoft analysis, which we extend to general α, and point out their limitations. Using an α-dependent eddy-damped quasinormal Markovian (EDQNM) closure, we seek self-similar inertial range solutions and study their characteristics. Our present focus is not on coherent structures, which the EDQNM filters out, but on any self-similar and approximately Gaussian turbulent component that may exist in the flow and be described by KLB phenomenology. For this, the EDQNM is an appropriate tool. Non-local triads contribute increasingly to the energy flux as α increases. More importantly, the energy cascade is downscale in the self-similar inertial range for 2.5 < α < 10. At α = 2.5 and α = 10, the KLB spectra correspond, respectively, to enstrophy and energy equipartition, and the triad energy transfers and flux vanish identically. Eddy turnover time and strain rate arguments suggest the inverse energy cascade should obey KLB phenomenology and be self-similar for α < 4. However, downscale energy flux in the EDQNM self-similar inertial range for α > 2.5 leads us to predict that any inverse cascade for α ≥ 2.5 will not exhibit KLB phenomenology, and specifically the KLB energy spectrum. Numerical simulations confirm this: the inverse cascade energy spectrum for α ≥ 2.5 is significantly steeper than the KLB prediction, while for α < 2.5 we obtain the KLB spectrum.
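A small utility sketch of the Fourier-space relation just quoted, converting a streamfunction field into generalized vorticity for a given α; the grid and domain size are illustrative assumptions:

```python
# theta_hat(k) = |k|^alpha * psi_hat(k), the alpha-turbulence relation above.
import numpy as np

def theta_from_psi(psi, alpha, L=2 * np.pi):
    n = psi.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky = np.meshgrid(k, k, indexing='ij')
    kmag = np.sqrt(kx**2 + ky**2)
    kmag[0, 0] = 1.0                 # avoid 0**alpha issues; mode zeroed below
    theta_hat = kmag**alpha * np.fft.fft2(psi)
    theta_hat[0, 0] = 0.0            # enforce zero mean
    return np.real(np.fft.ifft2(theta_hat))
```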

Relevance:

80.00%

Publisher:

Abstract:

We investigated drift-wave turbulence in the plasma edge of a small tokamak by considering solutions of the Hasegawa-Mima equation involving three interacting modes in Fourier space. The resulting low-dimensional dynamics exhibited periodic as well as chaotic evolution of the Fourier-mode amplitudes, and we controlled the chaotic behaviour by applying a fourth resonant wave of small amplitude.
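A generic sketch of the kind of low-dimensional system such a three-mode truncation produces: resonant three-wave amplitude equations. The coupling coefficients below are arbitrary illustrative numbers, not the Hasegawa-Mima coefficients:

```python
# Generic resonant three-wave amplitude equations, integrated with scipy.
import numpy as np
from scipy.integrate import solve_ivp

c = np.array([1.0, -0.5, -0.5])              # illustrative couplings

def three_wave(t, y):
    A = y[:3] + 1j * y[3:]                   # complex mode amplitudes
    dA = np.array([c[0] * np.conj(A[1] * A[2]),
                   c[1] * np.conj(A[0] * A[2]),
                   c[2] * np.conj(A[0] * A[1])])
    return np.concatenate([dA.real, dA.imag])

y0 = np.concatenate([[1.0, 0.1, 0.1], [0.0, 0.0, 0.0]])
sol = solve_ivp(three_wave, (0.0, 20.0), y0, max_step=0.01)
```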

Relevance:

80.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

80.00%

Publisher:

Abstract:

Visual perception and action are strongly linked with parallel processing channels connecting the retina, the lateral geniculate nucleus, and the input layers of the primary visual cortex. Achromatic vision is provided by at least two such channels, formed by the M and P neurons. These cell pathways are similarly organized in primates with different lifestyles, including diurnal and nocturnal species exhibiting a variety of color vision phenotypes. We describe the M and P cell properties by 3D Gábor functions and their 3D Fourier transforms. The M and P cells occupy different loci in the Gábor information diagram, or Fourier space. This separation allows the M and P pathways to transmit visual signals with distinct 6D joint entropy for space, spatial frequency, time, and temporal frequency. By combining the M and P inputs to the cortical neurons beyond the V1 input layers, the cortical pathways are able to process aspects of visual stimuli with better precision than would be possible using the M or P pathway alone. This performance fulfils the requirements of different behavioral tasks.
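A sketch of a 2D spatial Gabor function of the kind used (in 3D space-time form in the paper) to model receptive-field profiles; all parameters are illustrative:

```python
# Gaussian envelope times a cosine carrier: a 2D Gabor receptive field.
import numpy as np

def gabor2d(x, y, sigma_x, sigma_y, f0, phase=0.0):
    envelope = np.exp(-0.5 * ((x / sigma_x) ** 2 + (y / sigma_y) ** 2))
    return envelope * np.cos(2.0 * np.pi * f0 * x + phase)

ax = np.linspace(-1, 1, 128)
X, Y = np.meshgrid(ax, ax)
rf = gabor2d(X, Y, sigma_x=0.2, sigma_y=0.3, f0=4.0)
# Its Fourier transform is a Gaussian centred at (f0, 0): Gabor functions
# occupy the minimal joint area in the space / spatial-frequency diagram.
```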

Relevance:

80.00%

Publisher:

Abstract:

We study a model of fast magnetic reconnection in the presence of weak turbulence, proposed by Lazarian and Vishniac (1999), using three-dimensional direct numerical simulations. The model has already been successfully tested in Kowal et al. (2009), confirming the dependence of the reconnection speed V_rec on the turbulence injection power P_inj and the injection scale l_inj expressed by the constraint V_rec ∼ P_inj^(1/2) l_inj^(3/4), with no observed dependence on Ohmic resistivity. In Kowal et al. (2009), in order to drive turbulence, we injected velocity fluctuations in Fourier space with frequencies concentrated around k_inj = 1/l_inj, as described in Alvelius (1999). In this paper, we extend our previous studies by comparing fast magnetic reconnection under different mechanisms of turbulence injection, introducing a new way of driving the turbulence. The new method injects velocity or magnetic eddies with a specified amplitude and scale at random locations directly in real space. We provide exact relations between the eddy parameters and the turbulent power and injection scale. We performed simulations with the new forcing in order to study the dependence on turbulent power and injection scale. The results show no discrepancy between models with the two different methods of turbulence driving, which expose the same scalings in both cases. This is in agreement with the Lazarian and Vishniac (1999) predictions. In addition, we performed a series of models with varying viscosity ν. Although Lazarian and Vishniac (1999) do not provide any prediction for this dependence, we report a weak dependence of the reconnection speed on viscosity, V_rec ∼ ν^(−1/4).
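A toy 2D version of the real-space driving idea (the paper's simulations are 3D, and the exact amplitude-power relations are not reproduced here): add a divergence-free Gaussian eddy of chosen amplitude and scale at a random location.

```python
# Inject one divergence-free velocity eddy into a 2D field; periodic wrap
# of the eddy across domain boundaries is ignored for brevity.
import numpy as np

def inject_eddy(vx, vy, amp, scale, rng, L=2 * np.pi):
    n = vx.shape[0]
    x = np.linspace(0, L, n, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing='ij')
    x0, y0 = rng.uniform(0, L, size=2)
    r2 = (X - x0) ** 2 + (Y - y0) ** 2
    # Gaussian stream function -> velocity v = (dpsi/dy, -dpsi/dx)
    # is automatically divergence-free.
    psi = amp * scale * np.exp(-r2 / (2 * scale ** 2))
    vx += np.gradient(psi, x, axis=1)
    vy -= np.gradient(psi, x, axis=0)
    return vx, vy
```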

Relevance:

80.00%

Publisher:

Abstract:

Starting from the Fisher matrix for counts in cells, we derive the full Fisher matrix for surveys of multiple tracers of large-scale structure. The key step is the classical approximation, which allows us to write the inverse of the covariance of the galaxy counts in terms of the naive matrix inverse of the covariance in a mixed position-space and Fourier-space basis. We then compute the Fisher matrix for the power spectrum in bins of the 3D wavenumber k, the Fisher matrix for functions of position (or redshift z) such as the linear bias of the tracers and/or the growth function, and the cross-terms of the Fisher matrix that express the correlations between estimations of the power spectrum and estimations of the bias. When the bias and growth function are fully specified, and the Fourier-space bins are large enough that the covariance between them can be neglected, the Fisher matrix for the power spectrum reduces to the widely used result first derived by Feldman, Kaiser & Peacock. Assuming isotropy, a fully analytical calculation of the Fisher matrix in the classical approximation can be performed in the case of a constant-density, volume-limited survey.
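A numerical sketch of the Feldman-Kaiser-Peacock limit mentioned above, for the band power in a single k-bin; the survey volume, number density, and toy power spectrum are invented for illustration:

```python
# FKP-limit Fisher information per k-bin and the resulting relative error
# sigma_P / P = sqrt(2 / (N_modes * [nbar P / (1 + nbar P)]^2)).
import numpy as np

V = 1.0e9        # survey volume in (Mpc/h)^3 -- illustrative assumption
nbar = 3.0e-4    # tracer number density in (h/Mpc)^3 -- illustrative

def fisher_bandpower(k, dk, P):
    """F = (N_modes / 2) * [nbar P / (1 + nbar P)]^2 / P^2."""
    n_modes = V * 4.0 * np.pi * k**2 * dk / (2.0 * np.pi) ** 3
    v_eff = (nbar * P / (1.0 + nbar * P)) ** 2    # FKP effective-volume factor
    return 0.5 * n_modes * v_eff / P**2

k = np.linspace(0.02, 0.2, 10)                    # h/Mpc
P = 2.0e4 * (k / 0.1) ** (-1.5)                   # toy power spectrum
rel_err = 1.0 / (np.sqrt(fisher_bandpower(k, k[1] - k[0], P)) * P)
```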