918 results for Image pre-processing
Abstract:
OBJECTIVES: To examine the ambiguity tolerance, i.e. the ability to perceive new, contradictory and complex situations as positive challenges, of pre-lingually deafened adolescents who received a cochlear implant after their eighth birthday, and to identify those dimensions of ambiguity tolerance which correlate significantly with specific variables of their oral communication. DESIGN AND SETTING: Clinical survey at an academic tertiary referral center. PARTICIPANTS AND MAIN OUTCOME MEASURES: A questionnaire concerning communication and subjectively perceived changes compared to the pre-cochlear implant situation was completed by 13 pre-lingually deafened patients aged between 13 and 23 years who received their cochlear implants between the ages of 8 and 17 years. The results were correlated with the 'Inventory for Measuring Ambiguity Tolerance'. RESULTS: The patients showed a lower ambiguity tolerance, with a total score of 134.5, than the normative group, with a score of 143.1. There was a positive correlation between the total score for ambiguity tolerance and the frequency of 'use of oral speech', as well as between the subscale 'ambiguity tolerance towards apparently insoluble problems' and all five areas of oral communication that were investigated. Comparison of the two variables of oral communication that showed a significant difference pre- and postoperatively yielded a positive correlation with the subscale 'ambiguity tolerance towards the parental image'. CONCLUSIONS: Pre-lingually deafened juveniles with cochlear implants who increasingly use oral communication seem to regard the limits of a cochlear implant as an interesting challenge rather than an insoluble problem.
Abstract:
Sustainable yields from water wells in hard-rock aquifers are achieved when the well bore intersects fracture networks. Fracture networks are often not readily discernable at the surface. Lineament analysis using remotely sensed satellite imagery has been employed to identify surface expressions of fracturing, and a variety of image-analysis techniques have been successfully applied in “ideal” settings. An ideal setting for lineament detection is where the influences of human development, vegetation, and climatic conditions are minimal and hydrogeological conditions and geologic structure are known. There is not yet a well-accepted protocol for mapping lineaments, nor have different approaches been compared in non-ideal settings. A new approach for image-processing/synthesis was developed to identify successful satellite imagery types for lineament analysis in non-ideal terrain. Four satellite sensors (ASTER, Landsat7 ETM+, QuickBird, RADARSAT-1) and a digital elevation model were evaluated for lineament analysis in Boaco, Nicaragua, where the landscape is subject to varied vegetative cover, a plethora of anthropogenic features, and frequent cloud cover that limit the availability of optical satellite data. A variety of digital image processing techniques were employed and lineament interpretations were performed to obtain 12 complementary image products that were evaluated subjectively to identify lineaments. The 12 lineament interpretations were synthesized to create a raster image of lineament zone coincidence that shows the level of agreement among the 12 interpretations. A composite lineament interpretation was made using the coincidence raster to restrict lineament observations to areas where multiple interpretations (at least 4) agree. Nine of the 11 previously mapped faults were identified from the coincidence raster. An additional 26 lineaments were identified from the coincidence raster, and the locations of 10 were confirmed by field observation. Four manual pumping tests suggest that well productivity is higher for wells proximal to lineament features. Interpretations from RADARSAT-1 products were superior to interpretations from other sensor products, suggesting that quality lineament interpretation in this region requires anthropogenic features to be minimized and topographic expressions to be maximized. The approach developed in this study has the potential to improve the siting of wells in non-ideal regions.
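The coincidence-raster step described above amounts to stacking the 12 binary lineament interpretations, counting per-cell agreement, and keeping cells where at least four interpretations coincide. The sketch below illustrates that idea in NumPy under stated assumptions (co-registered rasters of equal size, a minimum agreement of 4); it is a simplified illustration, not the authors' implementation.

import numpy as np

def coincidence_raster(interpretations):
    """Stack binary lineament rasters (one per image product) and count
    how many interpretations mark each cell as lying in a lineament zone."""
    stack = np.stack(interpretations, axis=0)   # (n_interpretations, rows, cols)
    return stack.sum(axis=0)                    # per-cell agreement count

def composite_lineaments(interpretations, min_agreement=4):
    """Keep only cells where at least min_agreement interpretations coincide."""
    return coincidence_raster(interpretations) >= min_agreement

# Toy usage with 12 random binary rasters standing in for the 12 image products.
rng = np.random.default_rng(0)
rasters = [rng.random((200, 200)) > 0.8 for _ in range(12)]
composite = composite_lineaments(rasters, min_agreement=4)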
Abstract:
The aim of this study was to investigate how oculomotor behaviour depends on the availability of colour information in pictorial stimuli. Forty study participants viewed complex images in colour or grey-scale while their eye movements were recorded. We found two major effects of colour. First, although colour increases the complexity of an image, fixations on colour images were shorter than on their grey-scale versions. This suggests that colour enhances discriminability and thus affects low-level perceptual processing. Second, colour decreases the similarity of spatial fixation patterns between participants. The role of colour in visual attention seems to be more important than previously assumed, in theoretical as well as methodological terms.
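One common way to quantify the similarity of spatial fixation patterns between participants is to build smoothed fixation density maps and correlate them pairwise. The sketch below shows that approach only as an illustration; the abstract does not state which similarity measure was used, so the Gaussian smoothing width and the Pearson correlation are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_map(fixations, shape, sigma=25):
    """Smoothed fixation density map from (x, y) fixation coordinates
    (assumed to lie within the image bounds)."""
    density = np.zeros(shape)
    for x, y in fixations:
        density[int(y), int(x)] += 1
    density = gaussian_filter(density, sigma)
    return density / (density.sum() + 1e-12)

def mean_pairwise_similarity(maps):
    """Mean Pearson correlation between the fixation maps of all participant pairs."""
    flat = [m.ravel() for m in maps]
    corrs = [np.corrcoef(flat[i], flat[j])[0, 1]
             for i in range(len(flat)) for j in range(i + 1, len(flat))]
    return float(np.mean(corrs))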
Abstract:
All optical systems that operate in or through the atmosphere suffer from turbulence-induced image blur. Both military and civilian surveillance, gun-sighting, and target identification systems are interested in terrestrial imaging over very long horizontal paths, but atmospheric turbulence can blur the resulting images beyond usefulness. My dissertation explores the performance of a multi-frame blind deconvolution (MFBD) technique applied under anisoplanatic conditions for both Gaussian and Poisson noise model assumptions. The technique is evaluated for use in reconstructing images of scenes corrupted by turbulence in long horizontal-path imaging scenarios and compared to other speckle imaging techniques. Performance is evaluated via the reconstruction of a common object from three sets of simulated turbulence-degraded imagery representing low, moderate and severe turbulence conditions. Each set consisted of 1000 simulated, turbulence-degraded images. The mean-square-error (MSE) performance of the estimator is evaluated as a function of the number of images and the number of Zernike polynomial terms used to characterize the point spread function. I compare the MSE performance of speckle imaging methods and a maximum-likelihood MFBD method applied to long-path horizontal imaging scenarios. Both methods are used to reconstruct a scene from simulated imagery featuring anisoplanatic turbulence-induced aberrations. This comparison is performed over three sets of 1000 simulated images each for low, moderate and severe turbulence-induced image degradation. The comparison shows that speckle imaging techniques reduce the MSE by 46 percent, 42 percent and 47 percent on average for the low, moderate, and severe cases, respectively, using 15 input frames under daytime conditions and moderate frame rates. Similarly, the MFBD method provides 40 percent, 29 percent, and 36 percent improvements in MSE on average under the same conditions. The comparison is repeated under low-light conditions (less than 100 photons per pixel), where improvements of 39 percent, 29 percent and 27 percent are obtained using speckle imaging methods and 25 input frames, and 38 percent, 34 percent and 33 percent, respectively, for the MFBD method and 150 input frames. The MFBD estimator is applied to three sets of field data and the results are presented. Finally, a combined Bispectrum-MFBD hybrid estimator is proposed and investigated. This technique consistently provides a lower MSE and smaller variance in the estimate under all three simulated turbulence conditions.
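The percentage figures above are reductions in mean-square error relative to a reference; a minimal sketch of how such numbers are computed is given below. The choice of baseline (for example, the MSE of the raw degraded frames) is an assumption for illustration, not a detail taken from the dissertation.

import numpy as np

def mse(estimate, truth):
    """Mean-square error between a reconstructed image and the true object."""
    return float(np.mean((estimate - truth) ** 2))

def mse_reduction_percent(baseline_mse, reconstructed_mse):
    """Percent reduction in MSE relative to the chosen baseline."""
    return 100.0 * (baseline_mse - reconstructed_mse) / baseline_mse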
Abstract:
When we actively explore the visual environment, our gaze preferentially selects regions characterized by high contrast and high density of edges, suggesting that the guidance of eye movements during visual exploration is driven to a significant degree by perceptual characteristics of a scene. Converging findings suggest that the selection of the visual target for the upcoming saccade critically depends on a covert shift of spatial attention. However, it is unclear whether attention selects the location of the next fixation uniquely on the basis of global scene structure or additionally on local perceptual information. To investigate the role of spatial attention in scene processing, we examined eye fixation patterns of patients with spatial neglect during unconstrained exploration of natural images and compared these to healthy and brain-injured control participants. We computed luminance, colour, contrast, and edge information contained in image patches surrounding each fixation and evaluated whether they differed from randomly selected image patches. At the global level, neglect patients showed the characteristic ipsilesional shift of the distribution of their fixations. At the local level, patients with neglect and control participants fixated image regions in ipsilesional space that were closely similar with respect to their local feature content. In contrast, when directing their gaze to contralesional (impaired) space neglect patients fixated regions of significantly higher local luminance and lower edge content than controls. These results suggest that intact spatial attention is necessary for the active sampling of local feature content during scene perception.
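The local-feature analysis described above reduces, for each fixated location, to extracting an image patch and computing summary statistics that are then compared against randomly sampled patches. The sketch below illustrates this for a grey-scale image with stand-in definitions (mean intensity for luminance, standard deviation for contrast, mean Sobel gradient magnitude for edge content, a 32-pixel patch size); the study also analysed colour, and its exact feature definitions may differ.

import numpy as np
from scipy import ndimage

def patch_features(image, x, y, size=32):
    """Local luminance, RMS contrast and edge content of a patch centred on (x, y)."""
    h = size // 2
    patch = image[max(y - h, 0):y + h, max(x - h, 0):x + h].astype(float)
    luminance = patch.mean()
    contrast = patch.std()
    edges = np.hypot(ndimage.sobel(patch, axis=0), ndimage.sobel(patch, axis=1))
    return luminance, contrast, edges.mean()

def random_patch_features(image, n=1000, size=32, rng=None):
    """Feature distribution of randomly placed patches, the baseline against
    which fixated patches are compared."""
    if rng is None:
        rng = np.random.default_rng()
    ys = rng.integers(size, image.shape[0] - size, n)
    xs = rng.integers(size, image.shape[1] - size, n)
    return np.array([patch_features(image, x, y, size) for x, y in zip(xs, ys)])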
Abstract:
We present an algorithm for estimating dense image correspondences. Our versatile approach lends itself to various tasks typical for video post-processing, including image morphing, optical flow estimation, stereo rectification, disparity/depth reconstruction, and baseline adjustment. We incorporate recent advances in feature matching, energy minimization, stereo vision, and data clustering into our approach. At the core of our correspondence estimation we use Efficient Belief Propagation for energy minimization. While state-of-the-art algorithms only work on thumbnail-sized images, our novel feature downsampling scheme, in combination with a simple yet efficient data term compression, can cope with high-resolution data. The incorporation of SIFT (Scale-Invariant Feature Transform) features into the data term computation further resolves matching ambiguities, making long-range correspondence estimation possible. We detect occluded areas by evaluating the correspondence symmetry and apply Geodesic matting to automatically determine plausible values in these regions.
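Evaluating the correspondence symmetry is commonly implemented as a forward-backward consistency check: a pixel is flagged as occluded when mapping it forward and then backward does not return close to its starting position. The sketch below shows such a check for dense (dx, dy) correspondence fields; the pixel tolerance and the nearest-neighbour sampling are assumptions for illustration, not details from the paper.

import numpy as np

def occlusion_mask(flow_fwd, flow_bwd, tol=1.0):
    """Flag pixels whose forward-backward correspondence is inconsistent.
    flow_fwd, flow_bwd: arrays of shape (H, W, 2) holding (dx, dy) per pixel;
    tol is the allowed round-trip error in pixels."""
    h, w = flow_fwd.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Where does each pixel land in the second image (nearest-neighbour)?
    x2 = np.clip(np.round(xs + flow_fwd[..., 0]).astype(int), 0, w - 1)
    y2 = np.clip(np.round(ys + flow_fwd[..., 1]).astype(int), 0, h - 1)
    # Round trip: forward displacement plus the backward flow at the landing point.
    round_trip = flow_fwd + flow_bwd[y2, x2]
    return np.hypot(round_trip[..., 0], round_trip[..., 1]) > tol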
Abstract:
Covert brain activity related to task-free, spontaneous (i.e. unrequested), emotional evaluation of human face images was analysed in 27-channel averaged event-related potential (ERP) map series recorded from 18 healthy subjects while observing random sequences of face images without further instructions. After recording, subjects self-rated each face image on a scale from “liked” to “disliked”. These ratings were used to dichotomize the face images into the affective evaluation categories of “liked” and “disliked” for each subject and the subjects into the affective attitudes of “philanthropists” and “misanthropists” (depending on their mean rating across images). Event-related map series were averaged for “liked” and “disliked” face images and for “philanthropists” and “misanthropists”. The spatial configuration (landscape) of the electric field maps was assessed numerically by the electric gravity center, a conservative estimate of the mean location of all intracerebral, active, electric sources. Differences in electric gravity center location indicate activity of different neuronal populations. The electric gravity center locations of all event-related maps were averaged over the entire stimulus-on time (450 ms). The mean electric gravity center for disliked faces was located (significant across subjects) more to the right and somewhat more posterior than for liked faces. Similar differences were found between the mean electric gravity centers of misanthropists (more right and posterior) and philanthropists. Our neurophysiological findings are in line with neuropsychological findings, revealing visual emotional processing to depend on affective evaluation category and affective attitude, and extending the conclusions to a paradigm without directed task.
Abstract:
Image denoising methods have been implemented in both spatial and transform domains. Each domain has its advantages and shortcomings, which can be complemented by each other. State-of-the-art methods like block-matching 3D filtering (BM3D) therefore combine both domains. However, implementation of such methods is not trivial. We offer a hybrid method that is surprisingly easy to implement and yet rivals BM3D in quality.
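As an illustration of combining the two domains, and emphatically not the method proposed in the abstract, the toy sketch below hard-thresholds Fourier coefficients that do not stand out from the expected noise level and blends the result with a spatial median filter; the threshold scaling and the equal-weight blend are arbitrary assumptions.

import numpy as np
from scipy import ndimage

def toy_hybrid_denoise(noisy, sigma, cutoff=3.0, spatial_size=3):
    """Transform-domain step: zero FFT coefficients below cutoff * expected noise magnitude.
    Spatial-domain step: median filtering. The two estimates are simply averaged."""
    spectrum = np.fft.fft2(noisy)
    noise_mag = sigma * np.sqrt(noisy.size)          # expected coefficient magnitude of white noise
    spectrum[np.abs(spectrum) < cutoff * noise_mag] = 0
    transform_estimate = np.real(np.fft.ifft2(spectrum))
    spatial_estimate = ndimage.median_filter(noisy, size=spatial_size)
    return 0.5 * (transform_estimate + spatial_estimate)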
Abstract:
Morphometric investigations using a point and intersection counting strategy in the lung often are not able to reveal the full set of morphologic changes. This happens particularly when structural modifications are not expressed in terms of volume density changes and when rough and fine surface density alterations cancel each other at different magnifications. Making use of digital image processing, we present a methodological approach that makes it possible to quantify changes in the geometrical properties of the parenchymal lung structure easily and quickly and that closely reflects the visual appreciation of the changes. Randomly sampled digital images from light microscopic sections of lung parenchyma are filtered, binarized, and skeletonized. The lung septa are thus represented as a single-pixel-wide line network with nodal points and end points and the corresponding internodal and end segments. By automatically counting the number of points and measuring the lengths of the skeletal segments, the lung architecture can be characterized and very subtle structural changes can be detected. This new methodological approach to lung structure analysis is highly sensitive to morphological changes in the parenchyma: it detected highly significant quantitative alterations in the structure of lungs of rats treated with a glucocorticoid hormone, where classical morphometry had partly failed.
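The filter, binarize, skeletonize pipeline and the counting of nodal and end points can be sketched with scikit-image as below; the parameter choices (Gaussian smoothing, Otsu thresholding, septa assumed darker than airspaces) are illustrative stand-ins rather than the settings used in the study.

import numpy as np
from scipy import ndimage
from skimage import filters, morphology

def skeleton_metrics(gray_image):
    """Filter, binarize and skeletonize a parenchyma image, then count end points
    and nodal (branch) points of the resulting single-pixel-wide network."""
    smoothed = filters.gaussian(gray_image, sigma=1)
    binary = smoothed < filters.threshold_otsu(smoothed)   # septa assumed darker than airspaces
    skeleton = morphology.skeletonize(binary)
    # Count 8-connected skeleton neighbours of every skeleton pixel.
    kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
    neighbours = ndimage.convolve(skeleton.astype(int), kernel, mode="constant")
    end_points = int(np.sum(skeleton & (neighbours == 1)))
    nodal_points = int(np.sum(skeleton & (neighbours >= 3)))
    skeleton_length_px = int(skeleton.sum())                # crude total length in pixels
    return end_points, nodal_points, skeleton_length_px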
Abstract:
Quantitative imaging with 18F-FDG PET/CT has the potential to provide an in vivo assessment of response to radiotherapy (RT). However, comparing tissue tracer uptake in longitudinal studies is often confounded by variations in patient setup and potential treatment-induced gross anatomic changes. These variations make true response monitoring for the same anatomic volume a challenge, not only for tumors, but also for normal organs-at-risk (OAR). The central hypothesis of this study is that more accurate image registration will lead to improved quantitation of tissue response to RT with 18F-FDG PET/CT. Employing an in-house developed “demons”-based deformable image registration algorithm, pre-RT tumor and parotid gland volumes can be more accurately mapped to serial functional images. To test the hypothesis, specific aim 1 was designed to analyze whether deformably mapping tumor volumes rather than aligning to bony structures leads to superior tumor response assessment. We found that deformable mapping of the most metabolically avid regions improved response prediction (P<0.05). The positive predictive power for residual disease was 63% compared to 50% for contrast-enhanced post-RT CT. Specific aim 2 was designed to use parotid gland standardized uptake value (SUV) as an objective imaging biomarker for salivary toxicity. We found that the relative change in parotid gland SUV correlated strongly with salivary toxicity as defined by the RTOG/EORTC late effects analytic scale (Spearman’s ρ = -0.96, P<0.01). Finally, the goal of specific aim 3 was to create a phenomenological dose-SUV response model for the human parotid glands. Utilizing only baseline metabolic function and the planned dose distribution, the model can predict parotid SUV change and, building on specific aim 2, salivary toxicity. We found that the predicted and observed parotid SUV relative changes were significantly correlated (Spearman’s ρ = 0.94, P<0.01). The application of deformable image registration to quantitative treatment response monitoring with 18F-FDG PET/CT could have a profound impact on patient management. Accurate and early identification of residual disease may allow for more timely intervention, while the ability to quantify and predict toxicity of normal OAR might permit individualized refinement of radiation treatment plan designs.
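The correlation reported for specific aim 2 is a Spearman rank correlation between the relative change in parotid gland SUV and the late salivary toxicity grade; a minimal sketch of that computation is shown below. The per-patient numbers are made up purely to make the snippet runnable and are not data from the study.

import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-patient values: mean parotid SUV before and after RT,
# and RTOG/EORTC late salivary toxicity grade.
suv_pre = np.array([2.1, 1.9, 2.4, 2.0, 2.2])
suv_post = np.array([1.2, 1.5, 1.1, 1.7, 1.0])
toxicity_grade = np.array([3, 2, 3, 1, 4])

relative_change = (suv_post - suv_pre) / suv_pre
rho, p_value = spearmanr(relative_change, toxicity_grade)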
Abstract:
Among the many thousand scarabs, scaraboids and other stamp-seal amulets unearthed in Iron Age contexts in Cis- and Transjordan, there are many such seals showing royal Egyptian imagery on their bases. Focusing mainly on Pharaonic motifs, the paper aims to catalogue the principal iconemes, to trace their development throughout the Iron Ages and to extrapolate their significance vis-à-vis the contemporary glyptic assemblages. As will be shown, the royal imagery of the Egyptian king underwent considerable changes during pre-monarchic and monarchic times in Israel/Judah. This allows us – to some extent – to deduce how the ‘image’ of the Egyptian king was perceived in this part of the Southern Levant at the close of the second and during the first centuries of the first millennium BCE. The local seal production not only vividly copied earlier and contemporary Egyptian prototypes, but also developed idiosyncratic ‘Pharaonic’ motifs that were produced for the local market. On the other hand, imported Egyptian glyptic goods – such as scarabs and other amulet types – reveal further facets of the consumer behavior. They, too, shed light upon the ideological and religious preferences of the local population and illuminate the development of the vernacular attitude towards the Pharaonic symbols of power – including their obvious political and sacred connotations.
Abstract:
PURPOSE: To determine whether a 3-mm isotropic target margin adequately covers the prostate and seminal vesicles (SVs) during administration of an intensity-modulated radiation therapy (IMRT) treatment fraction, assuming that daily image-guided setup is performed just before each fraction. MATERIALS AND METHODS: In-room computed tomographic (CT) scans were acquired immediately before and after a daily treatment fraction in 46 patients with prostate cancer. An eight-field IMRT plan was designed using the pre-fraction CT with a 3-mm margin and subsequently recalculated on the post-fraction CT. For convenience of comparison, dose plans were scaled to a full course of treatment (75.6 Gy). Dose coverage was assessed on the post-treatment CT image set. RESULTS: During one treatment fraction (21.4+/-5.5 min), there were reductions in the volumes of the prostate and SVs receiving the prescribed dose (median reduction 0.1% and 1.0%, respectively, p<0.001) and in the minimum dose to 0.1 cm(3) of their volumes (median reduction 0.5 and 1.5 Gy, p<0.001). Of the 46 patients, three patients' prostates and eight patients' SVs did not maintain dose coverage above 70 Gy. Rectal filling correlated with a decreased percentage-volume of SV receiving 75.6, 70, and 60 Gy (p<0.02). CONCLUSIONS: The 3-mm intrafractional margin was adequate for prostate dose coverage. However, a significant subset of patients lost SV dose coverage. The rectal volume change significantly affected SV dose coverage. For advanced-stage prostate cancers, we recommend using larger margins or improving organ immobilization (such as with a rectal balloon) to ensure SV coverage.
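The coverage statistics quoted above (the percentage of a structure's volume receiving a given dose, and the minimum dose to the hottest 0.1 cm(3)) are standard dose-volume metrics; the sketch below shows how they can be computed from a 3D dose grid and a binary structure mask, assuming a uniform voxel volume. It is not the planning-system implementation used in the study.

import numpy as np

def volume_receiving_percent(dose, mask, threshold_gy):
    """Percentage of the structure's volume receiving at least threshold_gy."""
    structure_dose = dose[mask]
    return 100.0 * np.mean(structure_dose >= threshold_gy)

def min_dose_to_volume(dose, mask, volume_cc, voxel_volume_cc):
    """Minimum dose delivered to the hottest volume_cc (e.g. 0.1 cm^3) of the structure."""
    structure_dose = np.sort(dose[mask])[::-1]          # hottest voxels first
    n_voxels = max(1, int(round(volume_cc / voxel_volume_cc)))
    return float(structure_dose[:n_voxels].min())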
Abstract:
Studies in cocaine-dependent human subjects have shown differences in white matter on diffusion tensor imaging (DTI) compared with non-drug-using controls. It is not known whether the differences in fractional anisotropy (FA) seen on DTI in white matter regions of cocaine-dependent humans result from a pre-existing predilection for drug use or purely from cocaine abuse. To study the effect of cocaine on brain white matter, DTI was performed on 24 rats after continuous infusion of cocaine or saline for 4 weeks, followed by brain histology. Voxel-based morphometry analysis showed an 18% FA decrease in the splenium of the corpus callosum (CC) in cocaine-treated animals relative to saline controls. On histology, a significant increase in neurofilament expression (125%) and a decrease in myelin basic protein (40%) were observed in the same region in cocaine-treated animals. This study supports the hypothesis that chronic cocaine use alters white matter integrity in the human CC. Unlike in humans, where FA in the genu differed between cocaine users and non-users, in rats the splenium was affected. These differences between rodent and human findings could be due to several factors that include differences in brain structure and function between species and/or the dose, timing, and duration of cocaine administration.
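Fractional anisotropy is a closed-form function of the three eigenvalues of the diffusion tensor; the standard formula is sketched below purely as background for the FA values discussed in the abstract.

import numpy as np

def fractional_anisotropy(l1, l2, l3):
    """FA from the diffusion tensor eigenvalues: sqrt(3/2) * ||lambda - mean|| / ||lambda||."""
    mean_d = (l1 + l2 + l3) / 3.0
    num = (l1 - mean_d) ** 2 + (l2 - mean_d) ** 2 + (l3 - mean_d) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return float(np.sqrt(1.5 * num / den))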
Abstract:
The 3' cleavage generating non-polyadenylated animal histone mRNAs depends on the base pairing between U7 snRNA and a conserved histone pre-mRNA downstream element. This interaction is enhanced by a 100 kDa zinc finger protein (ZFP100) that forms a bridge between an RNA hairpin element upstream of the processing site and the U7 small nuclear ribonucleoprotein (snRNP). The N-terminus of Lsm11, a U7-specific Sm-like protein, was shown to be crucial for histone RNA processing and to bind ZFP100. By further analysing these two functions of Lsm11, we find that Lsm11 and ZFP100 can undergo two interactions, i.e. between the Lsm11 N-terminus and the zinc finger repeats of ZFP100, and between the N-terminus of ZFP100 and the Sm domain of Lsm11. Neither interaction is specific for the two proteins in vitro, but the second interaction is sufficient for a specific recognition of the U7 snRNP by ZFP100 in cell extracts. Furthermore, clustered point mutations in three phylogenetically conserved regions of the Lsm11 N-terminus impair or abolish histone RNA processing. As these mutations have no effect on the two interactions with ZFP100, these protein regions must play other roles in histone RNA processing, e.g. by contacting the pre-mRNA or additional processing factors.