12 results for images processing
in Aston University Research Archive
Abstract:
In Fourier domain optical coherence tomography (FD-OCT), a large amount of interference data needs to be resampled from the wavelength domain to the wavenumber domain prior to Fourier transformation. We present an approach to optimizing this data processing, using a graphics processing unit (GPU) and parallel processing algorithms. We demonstrate an increased processing and rendering rate over that previously reported by using GPU paged memory to render data on the GPU rather than copying back to the CPU. This avoids unnecessary and slow data transfer, enabling a processing and display rate of well over 524,000 A-scans/s for a single frame. To the best of our knowledge, this is the fastest processing demonstrated to date and the first time that FD-OCT processing and rendering has been demonstrated entirely on a GPU.
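The resampling-then-FFT step described above can be sketched on the CPU with NumPy (illustrative only; the function and variable names are assumptions, not the authors' GPU code, where this step maps to a parallel interpolation kernel followed by a batched FFT):

```python
import numpy as np

def resample_and_transform(spectrum, wavelengths):
    """Resample a spectral interferogram from evenly spaced wavelength
    samples to evenly spaced wavenumber (k = 2*pi/lambda) samples,
    then Fourier transform to obtain an A-scan depth profile."""
    k = 2 * np.pi / wavelengths               # wavenumber axis (non-uniform spacing)
    k_uniform = np.linspace(k.min(), k.max(), len(k))
    order = np.argsort(k)                     # np.interp needs ascending x; k falls as wavelength rises
    resampled = np.interp(k_uniform, k[order], spectrum[order])
    return np.abs(np.fft.fft(resampled))      # magnitude of the depth profile

# Illustrative interferogram: a single reflector gives a cosine fringe in k
wl = np.linspace(800e-9, 880e-9, 2048)        # wavelength samples (metres)
depth = 0.5e-3                                # 0.5 mm path difference
fringe = np.cos(2 * (2 * np.pi / wl) * depth) # interference fringe, uniform in wavelength
a_scan = resample_and_transform(fringe, wl)
```

A single reflector produces a fringe that is exactly sinusoidal in k, so after resampling the FFT yields a sharp peak at the bin corresponding to the reflector depth.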
Abstract:
The perception of an object as a single entity within a visual scene requires that its features are bound together and segregated from the background and/or other objects. Here, we used magnetoencephalography (MEG) to assess the hypothesis that coherent percepts may arise from the synchronized high frequency (gamma) activity between neurons that code features of the same object. We also assessed the role of low frequency (alpha, beta) activity in object processing. The target stimulus (i.e. object) was a small patch of a concentric grating of 3c/°, viewed eccentrically. The background stimulus was either a blank field or a concentric grating of 3c/° periodicity, viewed centrally. With patterned backgrounds, the target stimulus emerged--through rotation about its own centre--as a circular subsection of the background. Data were acquired using a 275-channel whole-head MEG system and analyzed using Synthetic Aperture Magnetometry (SAM), which allows one to generate images of task-related cortical oscillatory power changes within specific frequency bands. Significant oscillatory activity across a broad range of frequencies was evident at the V1/V2 border, and subsequent analyses were based on a virtual electrode at this location. When the target was presented in isolation, we observed that: (i) contralateral stimulation yielded a sustained power increase in gamma activity; and (ii) both contra- and ipsilateral stimulation yielded near identical transient power changes in alpha (and beta) activity. When the target was presented against a patterned background, we observed that: (i) contralateral stimulation yielded an increase in high-gamma (>55 Hz) power together with a decrease in low-gamma (40-55 Hz) power; and (ii) both contra- and ipsilateral stimulation yielded a transient decrease in alpha (and beta) activity, though the reduction tended to be greatest for contralateral stimulation. 
The opposing power changes across different regions of the gamma spectrum with 'figure/ground' stimulation suggest a possible dual role for gamma rhythms in visual object coding, and provide general support for the binding-by-synchronization hypothesis. As the power changes in alpha and beta activity were largely independent of the spatial location of the target, however, we conclude that their role in object processing may relate principally to changes in visual attention.
Abstract:
Background/Aims: Positron emission tomography has been applied to study cortical activation during human swallowing, but employs radio-isotopes, precluding repeated experiments, and has to be performed supine, making the task of swallowing difficult. Here we describe Synthetic Aperture Magnetometry (SAM) as a novel method of localising and imaging the brain's neuronal activity from magnetoencephalographic (MEG) signals, used to study the cortical processing of human volitional swallowing in the more physiological seated position. Methods: In 3 healthy male volunteers (age 28–36), 151-channel whole-cortex MEG (Omega-151, CTF Systems Inc.) was recorded whilst seated during the conditions of repeated volitional wet swallowing (5 ml boluses at 0.2 Hz) or rest. SAM analysis was then performed using varying spatial filters (5–60 Hz) before co-registration with individual MRI brain images. Activation areas were then identified using standard stereotactic-space neuro-anatomical maps. In one subject, repeat studies were performed to confirm the initial findings. Results: In all subjects, cortical activation maps for swallowing could be generated using SAM, the strongest activations being seen with 10–20 Hz filter settings. The main cortical activations associated with swallowing were in: sensorimotor cortex (BA 3,4), insular cortex and lateral premotor cortex (BA 6,8). Of relevance, each cortical region displayed consistent inter-hemispheric asymmetry, to one or other hemisphere, this being different for each region and for each subject. Intra-subject comparisons of activation localisation and asymmetry showed impressive reproducibility. Conclusion: SAM analysis using MEG is an accurate, repeatable, and reproducible method for studying the brain processing of human swallowing in a more physiological manner and provides novel opportunities for future studies of the brain-gut axis in health and disease.
Abstract:
Neurocognitive models propose a specialized neural system for processing threat-related information, in which the amygdala plays a key role in the analysis of threat cues. fMRI research indicates that the amygdala is sensitive to coarse visual threat-relevant information, for example low spatial frequency (LSF) fearful faces. However, fMRI cannot determine the temporal or spectral characteristics of neural responses. Consequently, we used magnetoencephalography to explore spatiotemporal patterns of activity in the amygdala and cortical regions with blurry (LSF) and normal angry, fearful, and neutral faces. Results demonstrated differences in amygdala activity between LSF threat-related and LSF neutral faces (50-250 msec after face onset). These differences were evident in the theta range (4-8 Hz) and were accompanied by power changes within visual and frontal regions. Our results support the view that the amygdala is involved in the early processing of coarse threat-related information and that theta is important in integrating activity within emotion-processing networks.
Abstract:
Fourier-phase information is important in determining the appearance of natural scenes, but the structure of natural-image phase spectra is highly complex and difficult to relate directly to human perceptual processes. This problem is addressed by extending previous investigations of human visual sensitivity to the randomisation and quantisation of Fourier phase in natural images. The salience of the image changes induced by these physical processes is shown to depend critically on the nature of the original phase spectrum of each image, and the processes of randomisation and quantisation are shown to be perceptually equivalent provided that they shift image phase components by the same average amount. These results are explained by assuming that the visual system is sensitive to those phase-domain image changes which also alter certain global higher-order image statistics. This assumption may be used to place constraints on the likely nature of cortical processing: mechanisms which correlate the outputs of a bank of relative-phase-sensitive units are found to be consistent with the patterns of sensitivity reported here.
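The two physical processes compared above, phase randomisation and phase quantisation, can be sketched with NumPy (a minimal illustration, not the study's stimulus-generation code; function names and parameters are assumptions):

```python
import numpy as np

def randomise_phase(image, max_shift):
    """Shift each Fourier phase component by a random amount drawn uniformly
    from [-max_shift, +max_shift] radians, keeping the amplitude spectrum
    intact (conjugate symmetry is restored by irfft2)."""
    spectrum = np.fft.rfft2(image)
    shifts = np.random.uniform(-max_shift, max_shift, spectrum.shape)
    shifted = np.abs(spectrum) * np.exp(1j * (np.angle(spectrum) + shifts))
    return np.fft.irfft2(shifted, s=image.shape)

def quantise_phase(image, levels):
    """Quantise each Fourier phase to the nearest of `levels` values evenly
    spaced around the circle; amplitudes are untouched."""
    spectrum = np.fft.rfft2(image)
    step = 2 * np.pi / levels
    quantised = np.round(np.angle(spectrum) / step) * step
    return np.fft.irfft2(np.abs(spectrum) * np.exp(1j * quantised), s=image.shape)

rng_image = np.random.default_rng(0).standard_normal((64, 64))
scrambled = randomise_phase(rng_image, np.pi)   # fully randomised phase
coarse = quantise_phase(rng_image, 4)           # 4-level phase quantisation
```

Both operations leave the amplitude spectrum intact while shifting phase components, so their average phase shift can be matched, which is the perceptual equivalence the study tests.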
Abstract:
Magnetoencephalography (MEG) is the measurement of the magnetic fields generated outside the head by the brain's electrical activity. The technique offers the promise of high temporal and spatial resolution. There is, however, an ambiguity in the inverse problem of estimating what goes on inside the head from what is measured outside. Other techniques, such as functional Magnetic Resonance Imaging (fMRI), have no such inversion problem, yet suffer from poorer temporal resolution. In this study we examined metrics of mutual information and linear correlation between volumetric images from the two modalities. Measures of mutual information reveal a significant, non-linear relationship between MEG and fMRI datasets across a number of frequency bands.
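A histogram-based mutual-information estimate of the kind compared against linear correlation can be sketched as follows (a minimal plug-in estimator; the binning choice and names are assumptions, not the study's pipeline):

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Estimate mutual information (in bits) between two images from their
    joint histogram. MI >= 0, with MI = 0 iff the binned variables are
    independent."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()                    # joint probability estimate
    px = pxy.sum(axis=1, keepdims=True)          # marginal of x
    py = pxy.sum(axis=0, keepdims=True)          # marginal of y
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
a = rng.standard_normal((128, 128))
b = a ** 2 + 0.1 * rng.standard_normal((128, 128))  # non-linear dependence on a
c = rng.standard_normal((128, 128))                 # independent of a
```

Unlike linear correlation, the estimate is sensitive to non-linear dependence: a squared relationship between two images yields near-zero correlation but clearly positive mutual information.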
Abstract:
Textured regions in images can be defined as those regions containing a signal which has some measure of randomness. This thesis is concerned with the description of homogeneous texture in terms of a signal model, and with the development of a means of spatially separating regions of differing texture. A signal model is presented which is based on the assumption that a large class of textures can adequately be represented by their Fourier amplitude spectra only, with the phase spectra modelled by a random process. It is shown that, under mild restrictions, the above model leads to a stationary random process. Results indicate that this assumption is valid for those textures lacking significant local structure. A texture segmentation scheme is described which separates textured regions based on the assumption that each texture has a different distribution of signal energy within its amplitude spectrum. A set of bandpass quadrature filters is applied to the original signal and the envelope of the output of each filter taken. The filters are designed to have maximum mutual energy concentration in both the spatial and spatial-frequency domains, thus providing high spatial and class resolutions. The outputs of these filters are processed using a multi-resolution classifier, which applies a clustering algorithm to the data at a low spatial resolution and then performs a boundary-estimation operation in which processing is carried out over a range of spatial resolutions. Results demonstrate a high performance, in terms of classification error, for a range of synthetic and natural textures.
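The filter-and-envelope stage can be sketched in one dimension (a minimal illustration under stated assumptions: a Gaussian bandpass over positive frequencies stands in for the maximally concentrated quadrature filters described, and all names and signal parameters are invented):

```python
import numpy as np

def quadrature_envelope(signal, centre, bandwidth):
    """Apply a Gaussian bandpass filter over positive frequencies only
    (a quadrature filter) and return the envelope of its complex output.
    `centre` and `bandwidth` are in cycles/sample."""
    n = len(signal)
    freqs = np.fft.fftfreq(n)
    gauss = np.exp(-0.5 * ((freqs - centre) / bandwidth) ** 2)
    gauss[freqs < 0] = 0.0                 # one-sided: output is analytic
    analytic = np.fft.ifft(np.fft.fft(signal) * 2 * gauss)
    return np.abs(analytic)               # envelope

# Two synthetic "textures": low frequency on the left, high on the right
n = 1024
t = np.arange(n)
sig = np.where(t < n // 2,
               np.sin(2 * np.pi * 0.05 * t),
               np.sin(2 * np.pi * 0.20 * t))
env_low = quadrature_envelope(sig, 0.05, 0.01)
env_high = quadrature_envelope(sig, 0.20, 0.01)
```

The envelope of each filter output is large only where the signal's energy falls in that filter's band, which is what makes the outputs usable as features for the multi-resolution classifier.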
Abstract:
Category-specific disorders are frequently explained by suggesting that living and non-living things are processed in separate subsystems (e.g. Caramazza & Shelton, 1998). If subsystems exist, there should be benefits for normal processing, beyond the influence of structural similarity. However, no previous study has separated the relative influences of similarity and semantic category. We created novel examples of living and non-living things so category and similarity could be manipulated independently. Pre-tests ensured that our images evoked appropriate semantic information and were matched for familiarity. Participants were trained to associate names with the images and then performed a name-verification task under two levels of time pressure. We found no significant advantage for living things alongside strong effects of similarity. Our results suggest that similarity rather than category is the key determinant of speed and accuracy in normal semantic processing. We discuss the implications of this finding for neuropsychological studies. © 2005 Psychology Press Ltd.
Abstract:
This thesis is organised into four parts. In Part I, relevant literature is reviewed and presented in three chapters. Chapter 1 examines legal and cultural factors in identifying the boundaries of rape. Chapter 2 discusses idiographic features and causal characteristics of rape suspects and victims. Chapter 3 reviews the evidence relating to attitudes toward rape, attribution of responsibility to victims, and the routine management of rape cases by the police. Part II comprises an experimental investigation of observer perception of the victims of violent crime. The experiment examined the processes by which impressions were attributed to victims of personal crime. The results suggested that discrepancies from observers' stereotypes were an important factor in their polarisation of victim ratings. The relevance of examining both the structure and process of impression formation was highlighted. Part III describes an extensive field study in which the West Midlands police files on rape for an eight-year period (1971–1978) were analysed. The study revealed a large number of interesting findings related to a wide range of relevant features of the crime. Further, the impact of common misconceptions and "myths" of rape was investigated across the legal and judicial processing of rape cases. The evidence suggests that these "myths" lead to differential biasing effects at different stages in the process. In the final part of this thesis, salient issues raised by the experiment and field study are discussed within the framework outlined in Part I. Potential implications for future developments and research are presented.
Abstract:
This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used for producing OCT-phantoms. Transparent materials are generally inert to infra-red radiation, but with fs lasers material modification occurs via non-linear processes when the highly focused light source interacts with the material. This modification is confined to the focal volume and is highly reproducible. In order to select the best inscription parameters, combinations of different parameter values were tested, using three fs laser systems with different operating properties, on a variety of materials. This facilitated the understanding of the key characteristics of the produced structures, with the aim of producing viable OCT-phantoms. Finally, OCT-phantoms were successfully designed and fabricated in fused silica. The use of these phantoms to characterise many properties (resolution, distortion, sensitivity decay, scan linearity) of an OCT system was demonstrated. Quantitative methods were developed to support the characterisation of an OCT system collecting images from phantoms, and also to improve the quality of the OCT images. Characterisation methods include the measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion. Processing of OCT data is a computationally intensive task. Standard central processing unit (CPU) based processing might take several minutes to a few hours to process acquired data, so data processing is a significant bottleneck. An alternative is to use expensive hardware-based processing such as field programmable gate arrays (FPGAs). However, graphics processing unit (GPU) based data processing methods have recently been developed to minimize this processing and rendering time.
These processing techniques include standard processing methods, a set of algorithms that process the raw interference data obtained by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented in a custom-built Fourier domain optical coherence tomography (FD-OCT) system. This system currently processes and renders data in real time; its processing throughput is at present limited by the camera capture rate. OCT-phantoms have been heavily used for the qualitative characterisation and adjustment/fine-tuning of the operating conditions of the OCT system, and investigations are under way to characterise OCT systems using our phantoms. The work presented in this thesis demonstrates several novel techniques for fabricating OCT-phantoms and for accelerating OCT data processing using GPUs. In the process of developing the phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding led to several novel pieces of research that are not only relevant to OCT but have broader importance. For example, extensive understanding of the properties of fs-inscribed structures will be useful in other photonic applications, such as the fabrication of phase masks, waveguides and microfluidic channels. Acceleration of data processing with GPUs is also useful in other fields.
Abstract:
Accurate measurement of intervertebral kinematics of the cervical spine can support the diagnosis of widespread diseases related to neck pain, such as chronic whiplash dysfunction, arthritis, and segmental degeneration. The natural inaccessibility of the spine, its complex anatomy, and the small range of motion make accurate measurement in vivo difficult. Low-dose X-ray fluoroscopy allows time-continuous screening of the cervical spine during the patient's spontaneous motion. To obtain accurate motion measurements, each vertebra was tracked by means of image processing along a sequence of radiographic images. To obtain a time-continuous representation of motion and to reduce noise in the experimental data, smoothing spline interpolation was used. Estimates of intervertebral motion for cervical segments were obtained by processing each patient's fluoroscopic sequence; the intervertebral angle and displacement and the instantaneous centre of rotation were computed. The RMS value of the fitting errors was about 0.2° for rotation and 0.2 mm for displacement. © 2013 Paolo Bifulco et al.
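The smoothing-spline step can be sketched on synthetic angle data (assuming SciPy's `UnivariateSpline`; the vertebra tracking itself needs image data and is not reproduced, and the signal parameters here are invented for illustration):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 100)                        # frame times (s)
true_angle = 15.0 * np.sin(2 * np.pi * 0.1 * t)        # smooth flexion/extension (deg)
measured = true_angle + rng.normal(0.0, 0.5, t.size)   # per-frame tracking noise

# s sets the smoothing/fidelity trade-off: sum of squared residuals <= s
spline = UnivariateSpline(t, measured, s=t.size * 0.5 ** 2)
smoothed = spline(t)

rms_raw = np.sqrt(np.mean((measured - true_angle) ** 2))
rms_smooth = np.sqrt(np.mean((smoothed - true_angle) ** 2))
```

A time-continuous curve also makes derivatives available via `spline.derivative()`, useful for downstream kinematic quantities.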
Abstract:
Fluoroscopic images exhibit severe signal-dependent quantum noise, due to the reduced X-ray dose involved in image formation, that is generally modelled as Poisson-distributed. However, image gray-level transformations, commonly applied by fluoroscopic devices to enhance contrast, modify the noise statistics and the relationship between image noise variance and expected pixel intensity. Image denoising is essential to improve the quality of fluoroscopic images and their clinical information content. Simple average filters are commonly employed in real-time processing, but they tend to blur edges and details. An extensive comparison of advanced denoising algorithms specifically designed for signal-dependent noise (AAS, BM3Dc, HHM, TLS) and for independent additive noise (AV, BM3D, K-SVD) is presented. Simulated test images degraded by various levels of Poisson quantum noise, as well as real clinical fluoroscopic images, were considered. Typical gray-level transformations (e.g. white compression) were also applied in order to evaluate their effect on the denoising algorithms. Performance was evaluated in terms of peak signal-to-noise ratio (PSNR), signal-to-noise ratio (SNR), mean square error (MSE), structural similarity index (SSIM) and computational time. On average, the filters designed for signal-dependent noise provided better image restorations than those assuming additive white Gaussian noise (AWGN). The collaborative denoising strategy was found to be the most effective in denoising both simulated and real data, also in the presence of image gray-level transformations. White compression, by inherently reducing the greater noise variance of brighter pixels, appeared to help the denoising algorithms perform more effectively. © 2012 Elsevier Ltd. All rights reserved.
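The signal dependence of quantum noise, and the observation above that white compression reduces the greater variance of brighter pixels, can be illustrated numerically (a sketch in which a pure square-root mapping, the Anscombe transform, stands in for a device's gray-level curve; all values are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
clean = np.tile(np.linspace(5.0, 200.0, 256), (256, 1))  # intensity ramp (photon counts)
noisy = rng.poisson(clean).astype(float)                 # quantum noise: variance = mean

def psnr(reference, estimate, peak):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((reference - estimate) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

# Poisson noise is signal-dependent: variance grows with brightness
dark_var = np.var((noisy - clean)[:, :32])               # dim columns
bright_var = np.var((noisy - clean)[:, -32:])            # bright columns

# Anscombe transform 2*sqrt(x + 3/8): a compressive gray-level mapping that
# makes the noise variance approximately 1 at every gray level
stab = 2 * np.sqrt(noisy + 3.0 / 8.0)
stab_clean = 2 * np.sqrt(clean + 3.0 / 8.0)
dark_var_s = np.var((stab - stab_clean)[:, :32])
bright_var_s = np.var((stab - stab_clean)[:, -32:])
```

After the transform the noise variance is close to 1 at every gray level, which is one reason variance-equalising mappings can help denoisers designed for additive noise.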