968 results for modulation transfer function


Relevance: 100.00%

Abstract:

PURPOSE: Positron emission tomography (PET)/computed tomography (CT) measurements of small lesions are impaired by the partial volume effect, which is intrinsically tied to the point spread function of the imaging system, including its reconstruction algorithms. The variability resulting from different point spread functions hinders the assessment of quantitative measurements in clinical routine and especially degrades comparability within multicenter trials. To improve quantitative comparability, methods are needed to match different PET/CT systems by eliminating this systematic variability. Consequently, a new method was developed and tested that transforms the image of an object as produced by one tomograph into the image of the same object as it would have been seen by a different tomograph. The proposed method, termed Transconvolution, compensates for the differing imaging properties of different tomographs and aims in particular at quantitative comparability of PET/CT in the context of multicenter trials.

METHODS: To solve the image normalization problem, the theory of Transconvolution was established mathematically, together with new methods to handle the point spread functions of different PET/CT systems. Knowing the point spread functions of two imaging systems allows a Transconvolution function to be determined that converts one image into the other. This function is calculated by convolving one point spread function with the inverse of the other, which, under certain boundary conditions such as the use of linear acquisition and image reconstruction methods, is a numerically accessible operation. For reliable measurement of the point spread functions characterizing different PET/CT systems, a dedicated solid-state phantom incorporating (68)Ge/(68)Ga-filled spheres was developed. To iteratively determine and represent these point spread functions, exponential density functions combined with a Gaussian distribution were introduced. Furthermore, simulation of a virtual PET system provided a standard imaging system with clearly defined properties to which the real PET systems were to be matched. A Hann window served as the modulation transfer function of the virtual PET; its apodization properties suppress spatial frequencies above a critical frequency, thereby fulfilling the boundary conditions mentioned above. The determined point spread functions were then used by the Transconvolution algorithm to match different PET/CT systems onto the virtual PET system. Finally, the method was validated by transforming phantom images acquired on two different PET systems into nearly identical data sets, as they would be imaged by the virtual PET system.

RESULTS: The proposed Transconvolution method matched different PET/CT systems for an improved and reproducible determination of a normalized activity concentration. The largest difference in measured activity concentration between the two PET systems, 18.2%, was found in spheres of 2 ml volume; Transconvolution reduced this difference to 1.6%. In addition to reestablishing comparability, the parameterization of point spread functions allowed a full characterization of the imaging properties of the examined tomographs.

CONCLUSIONS: By matching different tomographs to a virtual standardized imaging system, Transconvolution offers a new and comprehensive method for cross-calibration in quantitative PET imaging. The use of a virtual PET system restores comparability between data sets from different PET systems by exerting a common, reproducible, and defined partial volume effect.
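As a rough frequency-domain illustration of the idea described above (a sketch under our own assumptions, not the authors' implementation: a 1-D image, a Gaussian system PSF, and invented values for the PSF width `sigma` and the Hann critical frequency `cutoff`):

```python
# Sketch of frequency-domain transconvolution (illustrative only).
import numpy as np

n, dx = 512, 1.0                      # samples, mm per sample
f = np.fft.fftfreq(n, d=dx)           # spatial frequencies (cycles/mm)

sigma = 2.5                           # assumed Gaussian PSF width of the real tomograph (mm)
mtf_real = np.exp(-2 * (np.pi * f * sigma) ** 2)   # Fourier transform of a Gaussian PSF

cutoff = 0.08                         # assumed critical frequency of the virtual PET (cycles/mm)
mtf_virtual = np.where(np.abs(f) < cutoff,
                       0.5 * (1 + np.cos(np.pi * f / cutoff)),  # Hann apodization
                       0.0)

# Transconvolution filter: target MTF times the inverse of the measured MTF.
# The Hann window is zero wherever mtf_real is vanishingly small, which is what
# keeps the division numerically benign (the boundary condition mentioned above).
tc = mtf_virtual / np.maximum(mtf_real, 1e-12)

image = np.zeros(n); image[240:272] = 1.0                   # toy "lesion"
blurred = np.fft.ifft(np.fft.fft(image) * mtf_real).real    # as seen by the real PET
matched = np.fft.ifft(np.fft.fft(blurred) * tc).real        # as the virtual PET would see it
```

Applying `tc` to images from a second tomograph (with its own `mtf_real`) would, under the same assumptions, map both onto the common virtual system.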

Relevance: 100.00%

Abstract:

Separate physiological mechanisms that respond to spatial and temporal stimulation have been identified in the visual system. Some pathological conditions may selectively affect these mechanisms, offering a unique opportunity to investigate how psychophysical and electrophysiological tests reflect these visual processes, and thus to enhance the use of these tests in clinical diagnosis. Amblyopia and optical blur were studied, representing spatial visual defects of neural and optical origin, respectively. Selective defects of the visual pathways were also studied: optic neuritis, which affects the optic nerve, and dementia of the Alzheimer type, in which the higher association areas are believed to be affected but the primary projections spared. Seventy control subjects from 10 to 79 years of age were investigated, providing material for an additional study of the effect of age on the psychophysical and electrophysiological responses. Spatial processing was measured by visual acuity, the contrast sensitivity function or spatial modulation transfer function (MTF), and the pattern reversal and pattern onset-offset visual evoked potential (VEP). Temporal, or luminance, processing was measured by the de Lange curve, or temporal MTF, and the flash VEP. The pattern VEP was shown to reflect the integrity of the optic nerve, geniculostriate pathway and primary projections, and was related to high temporal frequency processing. The individual components of the flash VEP differed in their characteristics. The results suggested that the P2 component reflects the function of the higher association areas and is related to low temporal frequency processing, while the P1 component reflects the primary projection areas. The combination of a delayed flash P2 component and a normal-latency pattern VEP appears to be specific to dementia of the Alzheimer type and represents an important diagnostic test for this condition.

Relevance: 100.00%

Abstract:

This thesis studied the effect of (i) the number of grating components and (ii) parameter randomisation on root-mean-square (r.m.s.) contrast sensitivity and spatial integration. The effectiveness of spatial integration without external spatial noise depended on the number of equally spaced orientation components in the sum of gratings. The critical area marking the saturation of spatial integration was found to decrease as the number of components increased from 1 to 5-6, but increased again at 8-16 components. The critical area behaved similarly as a function of the number of grating components when stimuli consisted of 3, 6 or 16 components with different orientations and/or phases embedded in spatial noise. Spatial integration thus seemed to depend on the global Fourier structure of the stimulus. Spatial integration was similar for sums of two vertical cosine or sine gratings with various Michelson contrasts in noise. The critical area for a grating sum was found to be the sum of the logarithmic critical areas of the component gratings, weighted by their relative Michelson contrasts. The human visual system was modelled as a simple image processor in which the visual stimulus is first low-pass filtered by the optical modulation transfer function of the human eye and then high-pass filtered, up to the spatial cut-off frequency determined by the lowest neural sampling density, by the neural modulation transfer function of the visual pathways. Internal noise is then added before signal interpretation occurs in the brain. Detection is mediated by a local, spatially windowed matched filter. The model was extended to include complex stimuli, and it was found to describe the data successfully. The shape of the spatial integration function was similar for non-randomised and randomised simple and complex gratings. However, orientation and/or phase randomisation reduced r.m.s. contrast sensitivity by a factor of 2. The effect of parameter randomisation on spatial integration was modelled under the assumption that human observers change their strategy from cross-correlation (i.e., a matched filter) to auto-correlation detection when uncertainty is introduced into the task. The model described the data accurately.
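A toy version of the image-processor pipeline summarised above (our own minimal sketch, not the thesis code; the MTF shapes, noise level and window width are all invented):

```python
# Optical low-pass -> neural high-pass (with cut-off) -> internal noise ->
# detection by a local, spatially windowed matched filter.
import numpy as np

n = 256
x = np.arange(n)
f = np.fft.fftfreq(n)                            # cycles/sample

signal = np.sin(2 * np.pi * 0.05 * x)            # grating stimulus, 0.05 cycles/sample

optical_mtf = np.exp(-np.abs(f) / 0.15)          # low-pass: eye's optics (toy shape)
neural_mtf = np.clip(np.abs(f) / 0.02, 0, 1)     # high-pass: neural pathways (toy shape)
neural_mtf[np.abs(f) > 0.4] = 0                  # cut-off set by neural sampling density

filtered = np.fft.ifft(np.fft.fft(signal) * optical_mtf * neural_mtf).real
internal = filtered + np.random.default_rng(0).normal(0, 0.2, n)  # internal noise

window = np.exp(-0.5 * ((x - n / 2) / 40) ** 2)  # local spatial window
template = window * signal                       # windowed matched filter
decision = internal @ template                   # cross-correlation detection variable
```

Replacing `template` with a copy of the noisy input itself would correspond to the auto-correlation strategy the thesis invokes for randomised stimuli.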

Relevance: 100.00%

Abstract:

The locus of origin of the pattern evoked electroretinogram (PERG) has been the subject of considerable discussion. A novel approach was adopted in this study to further elaborate the nature of the PERG evoked by pattern onset/offset presentation. The PERG was found to be linearly related to stimulus contrast and, in particular, to the temporal contrast of the retinal image when elicited by patterns of low spatial frequency. At high spatial frequencies the retinal image contrast is significantly reduced by optical degradation, which is described by the eye's modulation transfer function (MTF). The retinal contrast of square-wave grating and chequerboard patterns of increasing spatial frequency was found by filtering their Fourier transforms by the MTF. The filtered pattern harmonics were then resynthesised to constitute a profile of retinal image illuminance from which the temporal and spatial contrast of the image could be calculated. If the PERG is a pure illuminance response, it should be spatially insensitive and depend only on the temporal contrast of stimulation. The calculated loss of temporal contrast for finer patterns was expressed as a space-averaged temporal contrast attenuation factor. This factor, applied to PERGs evoked by low spatial frequency patterns, was used to predict the retinal illuminance response elicited by a finer pattern. The predicted response was subtracted from the recorded signal, and the residual waveform was proposed to represent pattern specific activity. An additional correction for the attenuation of spatial contrast was applied to the extracted pattern specific response. Pattern specific responses computed in this way for different spatial frequency patterns are the predicted result of iso-contrast pattern stimulation. They demonstrate a striking bandpass spatial selectivity which peaks at higher spatial frequencies in the more central retina. The variation of spatial sensitivity with eccentricity corresponds closely with estimated ganglion cell receptive field centre separation and with psychophysical data. The variation of retinal structure with eccentricity, in the form of the volumes of the nuclear layers, was compared with the amplitudes of the computed retinal illuminance and pattern specific responses. The retinal illuminance response corresponds more closely to the outer and inner nuclear layers, whilst the pattern specific response appears more closely related to the ganglion cell layer. In general, the negative response transients correspond to the more proximal retinal layers. This thesis therefore supports the proposed contribution of proximal retinal cell activity to the PERG and describes techniques which may be further elaborated for more detailed studies of retinal receptive field dimensions.
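The filter-and-resynthesise step can be illustrated with a short sketch (our own, under stated assumptions: a 1-D square-wave grating, a toy exponential MTF, and truncation to 49 harmonics; the thesis's actual MTF and pattern spectra are not given in the abstract):

```python
# Michelson contrast of the retinal image of a square-wave grating after its
# Fourier harmonics are attenuated by a model MTF and resynthesised.
import numpy as np

def retinal_contrast(f0, mtf, n_harmonics=49):
    """f0: grating spatial frequency (cycles/degree); mtf: callable MTF."""
    x = np.linspace(0, 1 / f0, 1000)             # one spatial period (degrees)
    profile = np.zeros_like(x)
    for k in range(1, n_harmonics + 1, 2):       # square wave: odd harmonics, amplitude 4/(pi*k)
        profile += (4 / (np.pi * k)) * mtf(k * f0) * np.sin(2 * np.pi * k * f0 * x)
    profile += 1.0                               # mean retinal illuminance
    return (profile.max() - profile.min()) / (profile.max() + profile.min())

mtf = lambda f: np.exp(-f / 12.0)                # toy optical MTF, not the thesis's
for f0 in (1, 4, 8, 16):
    print(f0, "c/deg ->", round(retinal_contrast(f0), 3))
```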

Relevance: 100.00%

Abstract:

Optical coherence tomography (OCT) is a non-invasive three-dimensional imaging technique capable of producing high-resolution in-vivo images, and it is approved for use in clinical trials in Japan, the USA and Europe. For OCT to be used effectively in clinical diagnosis, a method of standardisation is required to assess performance across different systems. This standardisation can be implemented using highly accurate and reproducible artefacts for calibration, both at installation and throughout the lifetime of a system. Femtosecond lasers can write highly reproducible and highly localised micro-structured calibration artefacts within transparent media. We report on the fabrication of high-quality OCT calibration artefacts in fused silica using a femtosecond laser. Fused silica was chosen for its high purity and its ability to withstand high-energy femtosecond pulses. An Amplitude Systemes s-Pulse Yb:YAG femtosecond laser with an operating wavelength of 1026 nm was used to inscribe three-dimensional patterns within the highly optically transmissive substrate. Four unique artefacts were designed to measure a wide variety of parameters, including the point spread function (PSF), modulation transfer function (MTF), sensitivity, distortion and resolution: key parameters that define the performance of an OCT system. The calibration artefacts were characterised using an optical microscope and tested on a swept-source OCT. The results demonstrate that femtosecond-laser-inscribed artefacts have the potential to validate the performance of any OCT system, both quantitatively and qualitatively.
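One way a measured artefact could feed such a characterisation: compute the MTF as the normalised Fourier magnitude of a PSF profile extracted from the image of a single inscribed point. A generic sketch (the abstract does not describe the actual analysis pipeline; the Gaussian profile below is a stand-in):

```python
# MTF from a sampled 1-D PSF profile.
import numpy as np

def mtf_from_psf(psf, dx):
    """psf: sampled 1-D PSF; dx: sample spacing (um). Returns freqs (cycles/um), MTF."""
    psf = np.asarray(psf, dtype=float)
    psf /= psf.sum()                            # unit area, so MTF(0) = 1
    mtf = np.abs(np.fft.rfft(psf))
    freqs = np.fft.rfftfreq(len(psf), d=dx)
    return freqs, mtf

# e.g. a PSF profile extracted from the image of one femtosecond-written point:
x = np.linspace(-50, 50, 256)
freqs, mtf = mtf_from_psf(np.exp(-0.5 * (x / 6.0) ** 2), dx=x[1] - x[0])
```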

Relevance: 100.00%

Abstract:

We examined the intrinsic signals evoked by grating stimuli to determine whether the light-evoked intrinsic signals of the retina are due to changes in photoreceptor activity induced by the image projected onto the retina, or to neural activity of the inner retina. The retinas of the left eyes of 12 cats under general anesthesia were examined with a functional imaging fundus camera. Near-infrared light was used to monitor the reflectance changes (RCs) of the retina. Vertical gratings were used to stimulate the retina at 4 Hz, with spatial frequencies of 0.05, 0.11, 0.22, 0.43, 0.86, 1.73, and 3.46 cycles/degree (cpd). Ten images were averaged, and the peak value (PV) of a two-dimensional fast Fourier transform of the RCs was obtained. The wavefront aberrations were measured with a compact wavefront aberrometer, and the spatial modulation transfer function (MTF) of the eye was calculated. The retinal reflectance image showed a grating pattern. The PV of the spatial sensitivity curve was highest at low spatial frequencies (0.05 and 0.11 cpd), and the sensitivity decreased steeply with increasing spatial frequency; RCs were not detectable at 3.46 cpd. The MTF, in contrast, decreased gradually with increasing spatial frequency and was still 0.68 at 3.46 cpd. The reflectance pattern of the retinal intrinsic signal elicited by grating stimuli of different spatial frequencies thus differed from the MTF, suggesting that the intrinsic signal represents not only the response of the photoreceptors but also other neuronal or vascular changes in the retina.
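A minimal sketch of the PV readout described above, under our own assumptions (grid size, pixel scale and noise level are invented; the study's actual image geometry is not given):

```python
# Peak value of the 2-D FFT of an averaged reflectance-change image at the
# spatial frequency of a vertical grating stimulus.
import numpy as np

def fft_peak_value(rc_image, cpd, deg_per_pixel):
    spec = np.abs(np.fft.fftshift(np.fft.fft2(rc_image)))
    fx = np.fft.fftshift(np.fft.fftfreq(rc_image.shape[1], d=deg_per_pixel))
    col = np.argmin(np.abs(fx - cpd))            # column of the stimulus frequency
    row = rc_image.shape[0] // 2                 # zero vertical frequency (grating is vertical)
    return spec[row, col]

rng = np.random.default_rng(1)
yy, xx = np.mgrid[0:256, 0:256] * 0.05           # 0.05 deg/pixel field (assumed)
rc = 0.01 * np.sin(2 * np.pi * 0.43 * xx) + 0.002 * rng.normal(size=xx.shape)
pv = fft_peak_value(rc, cpd=0.43, deg_per_pixel=0.05)
```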

Relevance: 100.00%

Abstract:

This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used to produce OCT phantoms. Transparent materials are generally inert to infrared radiation, but with fs lasers material modification occurs via non-linear processes when the highly focused light source interacts with the material. This modification is confined to the focal volume and is highly reproducible. To select the best inscription parameters, combinations of different parameters were tested using three fs laser systems, with different operating properties, on a variety of materials. This facilitated an understanding of the key characteristics of the produced structures, with the aim of producing viable OCT phantoms. OCT phantoms were then successfully designed and fabricated in fused silica, and their use in characterising many properties of an OCT system (resolution, distortion, sensitivity decay, scan linearity) was demonstrated. Quantitative methods were developed both to support the characterisation of an OCT system from phantom images and to improve the quality of the OCT images. The characterisation methods include measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion. Processing of OCT data is computationally intensive: standard central processing unit (CPU) based processing may take from several minutes to a few hours, making data processing a significant bottleneck. An alternative is expensive hardware-based processing such as field programmable gate arrays (FPGAs); more recently, however, graphics processing unit (GPU) based methods have been developed to minimise processing and rendering time. These include standard-processing methods, a set of algorithms that process the raw interference data recorded by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented in a custom-built Fourier domain optical coherence tomography (FD-OCT) system, which currently processes and renders data in real time; its throughput is limited by the camera capture rate. The OCT phantoms have been used extensively for the qualitative characterisation and fine tuning of the operating conditions of the OCT system, and investigations are under way to characterise OCT systems using these phantoms. The work presented in this thesis demonstrates several novel techniques for fabricating OCT phantoms and for accelerating OCT data processing with GPUs. In developing the phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was gained, leading to several pieces of research that are relevant beyond OCT: an extensive understanding of the properties of fs-inscribed structures is useful in other photonic applications, such as the fabrication of phase masks, waveguides and microfluidic channels, and the acceleration of data processing with GPUs is useful in other fields.
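For orientation, this is the generic FD-OCT standard-processing chain of the kind such a pipeline accelerates (a NumPy sketch under our own assumptions, not the thesis implementation; on a GPU the same array code ports almost line for line to a library such as CuPy):

```python
# Background subtraction -> spectral windowing -> FFT -> log-magnitude A-scans.
import numpy as np

def ascans_from_spectra(spectra):
    """spectra: (n_ascans, n_samples) raw interference spectra, assumed k-linearised."""
    spectra = spectra - spectra.mean(axis=0)            # remove fixed-pattern background
    window = np.hanning(spectra.shape[1])               # suppress FFT sidelobes
    depth = np.fft.fft(spectra * window, axis=1)
    half = spectra.shape[1] // 2                        # keep the unambiguous depth range
    return 20 * np.log10(np.abs(depth[:, :half]) + 1e-12)

raw = np.random.default_rng(2).normal(1000, 10, (512, 2048))  # stand-in camera frames
bscan = ascans_from_spectra(raw)                        # one processed B-scan
```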

Relevance: 100.00%

Abstract:

With the progress of computer technology, computers are expected to be more intelligent in their interaction with humans, presenting information according to the user's psychological and physiological characteristics. However, computer users with visual problems may have difficulty perceiving icons, menus, and other graphical information displayed on the screen, limiting the efficiency of their interaction with computers. In this dissertation, a personalized and dynamic image precompensation method was developed to improve the visual performance of computer users with ocular aberrations. The precompensation is applied to graphical targets before they are presented on the screen, aiming to counteract the visual blurring caused by the ocular aberration of the user's eye. A complete and systematic modeling approach to describe the retinal image formation of the computer user was presented, using modeling tools such as Zernike polynomials, the wavefront aberration, the point spread function and the modulation transfer function. The ocular aberration of the computer user was first measured by a wavefront aberrometer, as a reference for the precompensation model. The dynamic precompensation was generated from the aberration rescaled to the real-time pupil diameter, which was monitored continuously. The potential visual benefit of the dynamic precompensation method was explored through software simulation, using aberration data from a real human subject. An "artificial eye" experiment was conducted by simulating the human eye with a high-definition camera, providing an objective evaluation of the image quality after precompensation. In addition, an empirical evaluation with 20 human participants was designed and implemented, involving image recognition tests performed under a realistic viewing environment of computer use. Statistical analysis of the empirical experiment confirmed the effectiveness of the dynamic precompensation method, showing a significant improvement in recognition accuracy. The merit and necessity of the dynamic precompensation were also substantiated by comparison with static precompensation, and its visual benefit was further confirmed by the subjective assessments collected from the participants.
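A minimal sketch of PSF-based precompensation in the spirit described above (not the dissertation's algorithm: we use a Wiener-regularised inverse filter, a toy Gaussian eye PSF, and an invented noise-to-signal constant `nsr`; real PSFs would come from the aberrometer data):

```python
# Wiener precompensation: boost the spatial frequencies the eye will blur away.
import numpy as np

def precompensate(target, psf, nsr=0.01):
    otf = np.fft.fft2(np.fft.ifftshift(psf / psf.sum()))   # optical transfer function
    wiener = np.conj(otf) / (np.abs(otf) ** 2 + nsr)       # regularised inverse filter
    pre = np.fft.ifft2(np.fft.fft2(target) * wiener).real
    return np.clip(pre, 0, 1)                              # keep in displayable range

n = 128
yy, xx = np.mgrid[0:n, 0:n] - n // 2
psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * 3.0 ** 2))        # toy eye PSF (Gaussian)
target = np.zeros((n, n)); target[48:80, 60:68] = 1.0      # on-screen graphical target
pre = precompensate(target, psf)                           # image actually sent to screen
```

In a dynamic scheme, `psf` would be recomputed whenever the monitored pupil diameter changes, which is the distinction the dissertation draws against static precompensation.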

Relevance: 100.00%

Abstract:

Recent advances in mobile phone cameras have poised them to take over compact hand-held cameras as the consumer's preferred camera option. Along with increases in pixel count, motion blur removal, face tracking and noise reduction algorithms play significant roles in the internal processing of these devices. An undesired effect of severe noise reduction is the loss of texture (i.e., low-contrast fine detail) in the original scene, and current established methods for resolution measurement fail to accurately portray the texture loss incurred in a camera system. The development of an accurate objective method to assess the texture preservation or texture reproduction capability of a camera device is therefore important. The 'Dead Leaves' target has been used extensively to measure the modulation transfer function (MTF) of cameras that employ highly non-linear noise-reduction methods. This stochastic model consists of a series of overlapping circles with radii r distributed as r^-3 and with uniformly distributed gray levels, which gives an accurate model of occlusion in a natural setting and hence mimics a natural scene. The target can be used to model texture transfer through a camera system when a natural scene is captured. In the first part of our study we identify various factors that affect the MTF measured using the Dead Leaves chart, including variations in illumination, distance, exposure time and ISO sensitivity, among others. We discuss the main differences between this method and existing resolution measurement techniques and identify its advantages. In the second part we propose an improvement to the current texture MTF measurement algorithm. High-frequency residual noise in the processed image has the same frequency content as fine texture detail and is sometimes reported as such, leading to inaccurate results. A wavelet-thresholding-based denoising technique is used to model the noise present in the final captured image, and this updated noise model is then used to calculate an accurate texture MTF. We present comparative results for both algorithms under various image capture conditions.
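A Dead Leaves chart following the stated model can be generated with a few lines (our sketch; the disc count, radius bounds and image size are arbitrary choices, not the paper's):

```python
# Dead Leaves chart: overlapping discs, radii drawn from an r^-3 density
# truncated to [r_min, r_max], uniformly distributed gray levels.
import numpy as np

def dead_leaves(n=512, n_discs=5000, r_min=2.0, r_max=100.0, seed=0):
    rng = np.random.default_rng(seed)
    # Inverse-CDF sampling of p(r) ~ r^-3 on [r_min, r_max]:
    u = rng.uniform(size=n_discs)
    r = 1.0 / np.sqrt(u / r_min**2 + (1 - u) / r_max**2)
    img = np.full((n, n), 0.5)
    yy, xx = np.mgrid[0:n, 0:n]
    for cx, cy, rad, g in zip(rng.uniform(0, n, n_discs),
                              rng.uniform(0, n, n_discs),
                              r, rng.uniform(size=n_discs)):
        img[(xx - cx) ** 2 + (yy - cy) ** 2 <= rad ** 2] = g   # later discs occlude earlier ones
    return img

chart = dead_leaves()
```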

Relevance: 100.00%

Abstract:

In this paper, a novel method of generating ultrawideband (UWB) doublets is proposed and experimentally demonstrated, based on cross-phase modulation in a semiconductor optical amplifier (SOA). The key component is an integrated SOA Mach-Zehnder interferometer pumped with an optical carrier modulated by a Gaussian pulse. The transfer function of the nonlinear conversion process leads to the generation of UWB doublet pulses when the SOA currents are tuned to different values.

Relevance: 90.00%

Abstract:

Multilevel converters can achieve an overall effective switching frequency multiplication, and a consequent ripple reduction, through the cancellation of the lowest-order switching frequency terms. This paper investigates the harmonic content and the frequency response of these multimodulator converters. It is shown that the transfer function of uniformly sampled modulators is a Bessel function associated with the inherent sampling process. Naturally sampled modulators have a flat transfer function, but multiple switchings per switch cycle will occur unless the input is slew-rate limited. Lower sideband harmonics of the effective carrier frequency and, in uniformly sampled converters, harmonics of the input signal also limit the useful bandwidth. Observations are made about the effect of the number of converters, their type (naturally or uniformly sampled), and the ratio of modulating frequency to switching frequency.

Relevance: 90.00%

Abstract:

Simultaneous acquisition of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) aims to disentangle the description of brain processes by exploiting the advantages of each technique. Most studies in this field focus on exploring the relationships between fMRI signals and the power spectrum in specific frequency bands (alpha, beta, etc.). On the other hand, brain mapping of EEG signals (e.g., interictal spikes in epileptic patients) usually assumes a haemodynamic response function for a parametric analysis applying the GLM, as a rough approximation. The integration of the information provided by the high spatial resolution of MR images and the high temporal resolution of EEG may be improved by relating them through transfer functions, which allows the identification of neurally driven areas without strong assumptions about the shape of the haemodynamic response or the homogeneity of brain haemodynamics. The difference in sampling rates is the first obstacle to a full integration of EEG and fMRI information; moreover, no parametric specification of a function representing the commonalities of both signals is established. In this study, we introduce a new data-driven method for estimating the transfer function from the EEG signal to the fMRI signal at the EEG sampling rate. This approach avoids subsampling the EEG to the fMRI time resolution and naturally provides a test of the EEG's predictive power over BOLD signal fluctuations, in a well-established statistical framework. We illustrate this concept in resting-state (eyes closed) and visual simultaneous fMRI-EEG experiments. The results show that it is possible to predict the BOLD fluctuations in occipital cortex from EEG measurements.
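Not the paper's estimator, but the standard cross-spectral version of the same idea, for reference: a transfer function from EEG to BOLD estimated as H(f) = S_xy(f) / S_xx(f). The sketch assumes both signals are already aligned and sampled at a common rate `fs`, whereas the paper's method works at the EEG sampling rate without such resampling:

```python
import numpy as np
from scipy.signal import csd, welch

def transfer_function(eeg, bold, fs, nperseg=256):
    f, s_xy = csd(eeg, bold, fs=fs, nperseg=nperseg)   # cross-spectrum EEG -> BOLD
    _, s_xx = welch(eeg, fs=fs, nperseg=nperseg)       # EEG auto-spectrum
    return f, s_xy / s_xx                              # complex gain and phase per frequency

fs = 250.0
t = np.arange(0, 120, 1 / fs)
eeg = np.random.default_rng(3).normal(size=t.size)                        # toy signals
bold = np.convolve(eeg, np.exp(-np.arange(0, 8, 1 / fs) / 2.0), "same")   # toy coupling
f, h = transfer_function(eeg, bold, fs)
```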

Relevance: 90.00%

Abstract:

Because it is not known where in the reflex arc (i.e., the afferent limb, the central nervous system, or the efferent limb) hyperglycemia affects baroreflex function, the present study examined the effect of short-term (30 min) hyperglycemia on aortic depressor nerve function, measured either by a mean arterial pressure vs aortic depressor nerve activity curve fitted by sigmoidal regression, or by cross-spectral analysis between mean arterial pressure and aortic depressor nerve activity. Anesthetized male Wistar rats received an intravenous bolus (0.25 mL) injection, followed by 30 min of infusion (1 mL/h), of 30% glucose (N = 14). Control groups received a bolus injection and infusion of 0.9% saline (N = 14) or 30% mannitol (N = 14). Glucose significantly increased both blood glucose and plasma osmolarity (P < 0.05), whereas mean arterial pressure did not change after glucose, saline or mannitol infusion. The mean arterial pressure vs nerve activity curves were identical before and 10 and 30 min after the beginning of glucose, saline or mannitol infusion. Slow (0.3 Hz) oscillations of arterial pressure were induced by controlled bleeding, and cross-spectral analysis was applied to arterial pressure and aortic nerve activity. The transfer function magnitude (the aortic depressor nerve activity/mean arterial pressure ratio in the frequency domain) was calculated as an index of the gain of the aortic depressor nerve and was similar in all groups during both induced and spontaneous oscillations of arterial pressure. In conclusion, the present study demonstrates, by means of two different approaches to assessing baroreceptor function, that aortic depressor nerve activity is not altered by short-term (30 min) hyperglycemia.
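An illustrative version of the first analysis, assuming a four-parameter logistic (Boltzmann) form for the sigmoid; all numbers below are synthetic, not the study's data:

```python
# Fit mean arterial pressure (MAP) vs aortic depressor nerve (ADN) activity
# with a four-parameter sigmoid and derive the maximum gain at the midpoint.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(map_mmhg, bottom, top, map50, slope):
    """ADN activity (% of max) as a sigmoidal function of MAP (mmHg)."""
    return bottom + (top - bottom) / (1 + np.exp(-(map_mmhg - map50) / slope))

map_mmhg = np.linspace(60, 180, 25)
rng = np.random.default_rng(4)
adn = sigmoid(map_mmhg, 5, 95, 120, 10) + rng.normal(0, 3, map_mmhg.size)  # synthetic

popt, _ = curve_fit(sigmoid, map_mmhg, adn, p0=[0, 100, 120, 10])
gain = (popt[1] - popt[0]) / (4 * popt[3])   # slope of the logistic at MAP50
```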

Relevance: 90.00%

Abstract:

This work presents a performance analysis of transmission schemes employing turbo trellis coded modulation. In general, the performance analysis of such schemes proceeds by evaluating their error probability. The exact evaluation of this probability is very complex and computationally inefficient; a widely used alternative is the union bound on the error probability, which is easy to implement and produces bounds that converge quickly. Because it is a union bound, some elements of the distance spectrum must be expurgated to obtain a tight bound. The main contribution of this work is that the enumeration is carried out with puncturing at the symbol level rather than at the bit level, as in most works in the literature. The main reason for using symbol-level puncturing is that the enumerating function of the turbo scheme is then obtained directly from the complex signal sequences through the trellis, rather than indirectly from binary sequences that require a further binary-to-complex mapping, as proposed in previous works. Matrix algorithms can thus be applied to the adjacency matrix, which is obtained by computing the distances between the complex sequences of the trellis. This work also presents two matrix algorithms, one for state reduction and one for evaluating the resulting transfer function. Comparisons of the bounds obtained with the proposed technique against turbo codes from the literature corroborate the proposition of this work: the expurgated bounds are quite tight, and the matrix algorithms are easily implemented in any programming language.
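For reference, a union bound of this kind commonly takes the generic form below (the abstract does not give the expression; this is the standard statement for trellis-coded schemes over the AWGN channel, in notation we choose here):

\[
P_e \;\le\; \sum_{\delta} A(\delta)\, Q\!\left(\sqrt{\frac{\delta^{2}}{2N_0}}\right),
\]

where \(\delta\) ranges over the Euclidean distances of error events, \(A(\delta)\) is their multiplicity as read from the transfer function of the (reduced) state diagram, and expurgation removes from \(A(\delta)\) the terms corresponding to event pairs that cannot occur in the punctured scheme.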