869 results for "spatial model"
Abstract:
Spatial data are particularly useful in mobile environments. However, due to the low bandwidth of most wireless networks, developing large spatial database applications becomes a challenging process. In this paper, we provide the first attempt to combine two important techniques, multiresolution spatial data structures and semantic caching, towards efficient spatial query processing in mobile environments. Based on a study of the characteristics of multiresolution spatial data (MSD) and multiresolution spatial queries, we propose a new semantic caching model called Multiresolution Semantic Caching (MSC) for caching MSD in mobile environments. MSC enriches the traditional three-category query processing in semantic caches to five categories, improving performance in three ways: 1) it reduces the number and complexity of remainder queries; 2) it avoids redundant transmission of spatial data already residing in the cache; 3) it provides satisfactory answers before 100% of the query results have been transmitted to the client side. Our extensive experiments on a very large and complex real spatial database show that MSC significantly outperforms traditional semantic caching models.
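The core semantic-caching operation the abstract builds on can be sketched for the simplest case: a rectangular window query decomposed against one cached rectangle into a probe query (answered locally) and remainder queries (fetched from the server). This is an illustrative sketch of that classic decomposition, not the paper's MSC algorithm; the names `Rect` and `split_query` are hypothetical.

```python
# Hypothetical sketch: semantic-cache decomposition of a rectangular
# window query into a probe (cache hit) and remainder rectangles.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    x1: float; y1: float; x2: float; y2: float

    def intersect(self, other):
        x1, y1 = max(self.x1, other.x1), max(self.y1, other.y1)
        x2, y2 = min(self.x2, other.x2), min(self.y2, other.y2)
        if x1 >= x2 or y1 >= y2:
            return None
        return Rect(x1, y1, x2, y2)

def split_query(query, cached):
    """Return (probe, remainders): the probe is answered from the cache;
    the remainders must still be fetched from the server."""
    probe = query.intersect(cached)
    if probe is None:
        return None, [query]
    remainders = []
    # Left and right slabs of the query outside the cached region.
    if query.x1 < probe.x1:
        remainders.append(Rect(query.x1, query.y1, probe.x1, query.y2))
    if probe.x2 < query.x2:
        remainders.append(Rect(probe.x2, query.y1, query.x2, query.y2))
    # Bottom and top slabs within the probe's x-span.
    if query.y1 < probe.y1:
        remainders.append(Rect(probe.x1, query.y1, probe.x2, probe.y1))
    if probe.y2 < query.y2:
        remainders.append(Rect(probe.x1, probe.y2, probe.x2, query.y2))
    return probe, remainders

probe, rest = split_query(Rect(0, 0, 10, 10), Rect(5, 5, 15, 15))
# probe covers (5,5)-(10,10); two remainder rectangles must be fetched.
```

MSC's five-category scheme refines this two-way split further for multiresolution data, which is what reduces remainder-query complexity and redundant transmission.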
Abstract:
This paper describes an electrical model of the ventricles incorporating real geometry and motion. Cardiac geometry and motion are obtained from segmentations of multiple-slice MRI time sequences. A static heart model developed previously is deformed to match the observed geometry using a novel shape-registration algorithm. The resulting electrocardiograms and body-surface potential maps are compared to a static simulation in the resting heart. These results demonstrate that introducing motion into the cardiac model modifies the ECG during the T wave at peak contraction of the ventricles.
Abstract:
This paper presents the creation of 3D statistical shape models of the knee bones and their use to embed information into a segmentation system for MRIs of the knee. We propose utilising the strong spatial relationship between the cartilages and the bones in the knee by embedding this information into the created models. This information can then be used to automate the initialisation of segmentation algorithms for the cartilages. The approach used to automatically generate the 3D statistical shape models of the bones is based on the point distribution model optimisation framework of Davies. Our implementation of this scheme uses a parameterised surface extraction algorithm as the basis for the optimisation scheme that automatically creates the 3D statistical shape models. The current approach is illustrated by generating 3D statistical shape models of the patella, tibia and femur from a segmented database of the knee. The use of these models to embed spatial-relationship information to aid in the automation of segmentation algorithms for the cartilages is then illustrated.
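The statistical-shape-model machinery referred to above reduces, at its core, to PCA over corresponding landmark coordinates. Below is a minimal sketch of a point-distribution model on synthetic 2-D data; real pipelines such as Davies' framework also optimise the correspondences themselves, which this sketch does not attempt.

```python
# Minimal point-distribution (statistical shape) model via PCA on
# synthetic landmark data. All data here are synthetic; this is an
# illustration of the general technique, not the paper's pipeline.
import numpy as np

rng = np.random.default_rng(0)
n_shapes, n_points = 20, 30

# Synthetic training set: a circle whose radius varies across subjects.
angles = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
base = np.stack([np.cos(angles), np.sin(angles)], axis=1)     # (30, 2)
shapes = np.stack([(1.0 + 0.2 * rng.standard_normal()) * base
                   for _ in range(n_shapes)])                 # (20, 30, 2)

X = shapes.reshape(n_shapes, -1)          # each row: flattened landmarks
mean_shape = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
var = s**2 / (n_shapes - 1)               # variance captured by each mode

# The single source of variation (overall scale) should dominate.
explained = var[0] / var.sum()
```

New shapes are then generated as the mean plus a weighted sum of the leading modes, which is what allows the bone models to carry cartilage-position priors into the segmentation system.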
Abstract:
How do signals from the two eyes combine and interact? Our recent work has challenged earlier schemes in which monocular contrast signals are subject to square-law transduction followed by summation across eyes and binocular gain control. Much more successful was a new 'two-stage' model in which the initial transducer was almost linear and contrast gain control occurred both pre- and post-binocular summation. Here we extend that work by: (i) exploring the two-dimensional stimulus space (defined by left- and right-eye contrasts) more thoroughly, and (ii) performing contrast discrimination and contrast matching tasks for the same stimuli. Twenty-five base stimuli, made from 1 c/deg patches of horizontal grating, were defined by the factorial combination of five contrasts for the left eye (0.3-32%) with five contrasts for the right eye (0.3-32%). Other than in contrast, the gratings in the two eyes were identical. In a 2IFC discrimination task, the base stimuli were masks (pedestals), where the contrast increment was presented to one eye only. In a matching task, the base stimuli were standards to which observers matched the contrast of either a monocular or binocular test grating. In the model, discrimination depends on the local gradient of the observer's internal contrast-response function, while matching equates the magnitude (rather than gradient) of response to the test and standard. With all model parameters fixed by previous work, the two-stage model successfully predicted both the discrimination and the matching data and was much more successful than linear or quadratic binocular summation models. These results show that performance measures and perception (contrast discrimination and contrast matching) can be understood in the same theoretical framework for binocular contrast vision. © 2007 VSP.
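The 'two-stage' architecture described above can be sketched as a response function with gain control before and after binocular summation. The exponents and constants below are placeholder values chosen for illustration, not the paper's fitted parameters.

```python
# Illustrative sketch of a two-stage binocular contrast-response function:
# near-linear monocular transduction with gain control both before and
# after binocular summation. Parameter values are placeholders only.
def two_stage_response(cl, cr, m=1.3, s=1.0, p=8.0, q=6.5, z=0.01):
    stage1_l = cl**m / (s + cl + cr)   # pre-summation gain control
    stage1_r = cr**m / (s + cl + cr)   # (each eye suppressed by both eyes)
    b = stage1_l + stage1_r            # binocular summation
    return b**p / (z + b**q)           # post-summation gain control

# A binocular grating should evoke a larger response than either
# monocular component alone (binocular summation).
mono = two_stage_response(8.0, 0.0)
bino = two_stage_response(8.0, 8.0)
```

In this framework, discrimination thresholds follow from the local gradient of the response function, while matching equates the response magnitude itself, which is why one set of parameters can address both tasks.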
Abstract:
Edges are key points of information in visual scenes. One important class of models supposes that edges correspond to the steepest parts of the luminance profile, implying that they can be found as peaks and troughs in the response of a gradient (1st derivative) filter, or as zero-crossings in the 2nd derivative (ZCs). We tested those ideas using a stimulus that has no local peaks of gradient and no ZCs, at any scale. The stimulus profile is analogous to the Mach ramp, but it is the luminance gradient (not the absolute luminance) that increases as a linear ramp between two plateaux; the luminance profile is a blurred triangle-wave. For all image-blurs tested, observers marked edges at or close to the corner points in the gradient profile, even though these were not gradient maxima. These Mach edges correspond to peaks and troughs in the 3rd derivative. Thus Mach edges are inconsistent with many standard edge-detection schemes, but are nicely predicted by a recent model that finds edge points with a 2-stage sequence of 1st then 2nd derivative operators, each followed by a half-wave rectifier.
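The stimulus and the 3rd-derivative account described above can be demonstrated numerically: a gradient profile that ramps linearly between two plateaux has no gradient maxima and no 2nd-derivative zero-crossings, yet the 3rd derivative peaks exactly at the ramp's corner points. This is a sketch of that observation, not the authors' full two-stage filter model.

```python
# The gradient profile is a linear ramp between two plateaux, so the
# luminance profile has no gradient maxima and no 2nd-derivative
# zero-crossings, yet the 3rd derivative peaks at the corner points.
import numpy as np

x = np.arange(400)
# Gradient profile: plateau at 1, linear ramp up to 3, plateau at 3.
grad = np.interp(x, [0, 150, 250, 399], [1.0, 1.0, 3.0, 3.0])
lum = np.cumsum(grad)                  # luminance: blurred triangle-wave ramp

d1 = np.gradient(lum)                  # monotonic: no local gradient peaks
d2 = np.gradient(d1)                   # non-negative: no zero-crossings
d3 = np.gradient(d2)                   # peaks at the two corner points

corner_lo = int(np.argmax(d3))         # near x = 150 (light-to-dark side)
corner_hi = int(np.argmin(d3))         # near x = 250
```

Observers mark edges at these corner points, which is why gradient-peak and zero-crossing detectors fail here while a 3rd-derivative account succeeds.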
Abstract:
Adapting to blurred images makes in-focus images look too sharp, and vice versa (Webster et al, 2002 Nature Neuroscience 5 839 - 840). We asked how such blur adaptation is related to contrast adaptation. Georgeson (1985 Spatial Vision 1 103 - 112) found that grating contrast adaptation followed a subtractive rule: perceived (matched) contrast of a grating was fairly well predicted by subtracting some fraction k (~0.3) of the adapting contrast from the test contrast. Here we apply that rule to the responses of a set of spatial filters at different scales and orientations. Blur is encoded by the pattern of filter response magnitudes over scale. We tested two versions - the 'norm model' and 'fatigue model' - against blur-matching data obtained after adaptation to sharpened, in-focus or blurred images. In the fatigue model, filter responses are simply reduced by exposure to the adapter. In the norm model, (a) the visual system is pre-adapted to a focused world and (b) discrepancy between observed and expected responses to the experimental adapter leads to additional reduction (or enhancement) of filter responses during experimental adaptation. The two models are closely related, but only the norm model gave a satisfactory account of results across the four experiments analysed, with one free parameter k. This model implies that the visual system is pre-adapted to focused images, that adapting to in-focus or blank images produces no change in adaptation, and that adapting to sharpened or blurred images changes the state of adaptation, leading to changes in perceived blur or sharpness.
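The subtractive rule cited above is simple enough to state in one line. This sketch applies it to a single contrast value; in the models discussed, the same rule is applied to each spatial filter's response. The floor at zero is an added assumption for illustration.

```python
# Georgeson's (1985) subtractive rule: matched contrast is approximately
# the test contrast minus a fraction k of the adapting contrast.
# The max(0, ...) floor is an illustrative assumption, not from the text.
def matched_contrast(test, adapt, k=0.3):
    return max(0.0, test - k * adapt)

# e.g. a 32% test seen after adapting to 32% contrast:
m = matched_contrast(0.32, 0.32)
```

In the norm model, the quantity playing the role of `adapt` is not the raw adapter response but the discrepancy between observed and expected responses, which is what lets adaptation to a focused world leave perception unchanged.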
Abstract:
Edge detection is crucial in visual processing. Previous computational and psychophysical models have often used peaks in the gradient or zero-crossings in the 2nd derivative to signal edges. We tested these approaches using a stimulus that has no such features. Its luminance profile was a triangle wave, blurred by a rectangular function. Subjects marked the position and polarity of perceived edges. For all blur widths tested, observers marked edges at or near 3rd derivative maxima, even though these were not 1st derivative maxima or 2nd derivative zero-crossings, at any scale. These results are predicted by a new nonlinear model based on 3rd derivative filtering. As a critical test, we added a ramp of variable slope to the blurred triangle-wave luminance profile. The ramp has no effect on the (linear) 2nd or higher derivatives, but the nonlinear model predicts a shift from seeing two edges to seeing one edge as the ramp gradient increases. Results of two experiments confirmed such a shift, thus supporting the new model. [Supported by the Engineering and Physical Sciences Research Council].
Abstract:
Edges are key points of information in visual scenes. One important class of models supposes that edges correspond to the steepest parts of the luminance profile, implying that they can be found as peaks and troughs in the response of a gradient (first-derivative) filter, or as zero-crossings (ZCs) in the second-derivative. A variety of multi-scale models are based on this idea. We tested this approach by devising a stimulus that has no local peaks of gradient and no ZCs, at any scale. Our stimulus profile is analogous to the classic Mach-band stimulus, but it is the local luminance gradient (not the absolute luminance) that increases as a linear ramp between two plateaux. The luminance profile is a smoothed triangle wave and is obtained by integrating the gradient profile. Subjects used a cursor to mark the position and polarity of perceived edges. For all the ramp-widths tested, observers marked edges at or close to the corner points in the gradient profile, even though these were not gradient maxima. These new Mach edges correspond to peaks and troughs in the third-derivative. They are analogous to Mach bands - light and dark bars are seen where there are no luminance peaks but there are peaks in the second derivative. Here, peaks in the third derivative were seen as light-to-dark edges, troughs as dark-to-light edges. Thus Mach edges are inconsistent with many standard edge detectors, but are nicely predicted by a new model that uses a (nonlinear) third-derivative operator to find edge points.
Abstract:
Blurred edges appear sharper in motion than when they are stationary. We proposed a model of this motion sharpening that invokes a local, nonlinear contrast transducer function (Hammett et al, 1998 Vision Research 38 2099-2108). Response saturation in the transducer compresses or 'clips' the input spatial waveform, rendering the edges as sharper. To explain the increasing distortion of drifting edges at higher speeds, the degree of nonlinearity must increase with speed or temporal frequency. A dynamic contrast gain control before the transducer can account for both the speed dependence and approximate contrast invariance of motion sharpening (Hammett et al, 2003 Vision Research, in press). We show here that this model also predicts perceived sharpening of briefly flashed and flickering edges, and we show that the model can account fairly well for experimental data from all three modes of presentation (motion, flash, and flicker). At moderate durations and lower temporal frequencies the gain control attenuates the input signal, thus protecting it from later compression by the transducer. The gain control is somewhat sluggish, and so it suffers both a slow onset, and loss of power at high temporal frequencies. Consequently, brief presentations and high temporal frequencies of drift and flicker are less protected from distortion, and show greater perceptual sharpening.
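The 'clipping' idea above can be demonstrated with a toy transducer: compressing a blurred edge profile steepens its central transition, so the edge looks sharper. Here `tanh` stands in for the paper's transducer, and the gain `g` plays the role of the nonlinearity that grows with speed or temporal frequency; both are illustrative assumptions.

```python
# Sketch: a compressive (saturating) transducer applied to a blurred edge
# steepens its central slope. tanh is a stand-in transducer; the gain is
# an illustrative proxy for the speed-dependent nonlinearity.
import numpy as np

x = np.linspace(-3, 3, 601)
edge = np.tanh(x)                            # a blurred edge profile

def central_slope(profile):
    i = len(profile) // 2                    # x = 0 at the edge centre
    return (profile[i + 1] - profile[i - 1]) / (x[i + 1] - x[i - 1])

weak = np.tanh(1.0 * edge) / np.tanh(1.0)    # mild compression
strong = np.tanh(4.0 * edge) / np.tanh(4.0)  # strong compression ('clipping')
# Stronger compression -> steeper central transition -> sharper-looking edge.
```

The gain control in the full model sits before this transducer: at moderate durations and low temporal frequencies it attenuates the input, protecting it from compression; brief or high-frequency inputs bypass that protection and so appear sharpened.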
Abstract:
We describe a template model for perception of edge blur and identify a crucial early nonlinearity in this process. The main principle is to spatially filter the edge image to produce a 'signature', and then find which of a set of templates best fits that signature. Psychophysical blur-matching data strongly support the use of a second-derivative signature, coupled to Gaussian first-derivative templates. The spatial scale of the best-fitting template signals the edge blur. This model predicts blur-matching data accurately for a wide variety of Gaussian and non-Gaussian edges, but it suffers a bias when edges of opposite sign come close together in sine-wave gratings and other periodic images. This anomaly suggests a second general principle: the region of an image that 'belongs' to a given edge should have a consistent sign or direction of luminance gradient. Segmentation of the gradient profile into regions of common sign is achieved by implementing the second-derivative 'signature' operator as two first-derivative operators separated by a half-wave rectifier. This multiscale system of nonlinear filters predicts perceived blur accurately for periodic and aperiodic waveforms. We also outline its extension to 2-D images and infer the 2-D shape of the receptive fields.
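The template principle above can be verified numerically for a Gaussian edge: the 2nd-derivative 'signature' of a Gaussian edge of blur b is itself a Gaussian 1st-derivative of scale b, so the best-fitting template scale recovers the blur. The implementation details below (grid, scale range, correlation measure) are illustrative.

```python
# Sketch of the template model for a Gaussian edge: filter the edge to a
# 2nd-derivative signature, then find the Gaussian 1st-derivative
# template whose scale best matches it. Details are illustrative.
import numpy as np

x = np.linspace(-40, 40, 2001)
dx = x[1] - x[0]

true_blur = 4.0
g = np.exp(-x**2 / (2 * true_blur**2))
edge = np.cumsum(g) * dx                             # Gaussian-blurred edge

signature = np.gradient(np.gradient(edge, dx), dx)   # 2nd-derivative signature

def template(s):                                     # unit-norm G1 template
    t = -x / s**2 * np.exp(-x**2 / (2 * s**2))
    return t / np.linalg.norm(t)

scales = np.arange(1.0, 9.1, 0.25)
fits = [abs(np.dot(signature, template(s))) for s in scales]
best_scale = scales[int(np.argmax(fits))]            # recovers true_blur
```

The rectifier described in the abstract enters when edges of opposite sign come close together: implementing the signature operator as two 1st-derivative stages with a half-wave rectifier between them segments the gradient profile into regions of common sign before template matching.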
Abstract:
In Alzheimer's disease (AD) brain, beta-amyloid (Abeta) deposits and neurofibrillary tangles (NFT) are not randomly distributed but exhibit a spatial pattern, i.e., a departure from randomness towards regularity or clustering. Studies of the spatial pattern of a lesion may contribute to an understanding of its pathogenesis and therefore, of AD itself. This article describes the statistical methods most commonly used to detect the spatial patterns of brain lesions and the types of spatial patterns exhibited by Abeta deposits and NFT in the cerebral cortex in AD. These studies suggest that within the cerebral cortex, Abeta deposits and NFT exhibit a similar spatial pattern, i.e., an aggregation of individual lesions into clusters which are regularly distributed parallel to the pia mater. The location, size and distribution of these clusters support the hypothesis that AD is a 'disconnection syndrome' in which degeneration of specific cortical pathways results in the formation of clusters of NFT and Abeta deposits. In addition, a model to explain the development of the pathology within the cerebral cortex is proposed.
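One standard statistical method for detecting the departure from randomness described above is the quadrat-count variance/mean ratio (index of dispersion): approximately 1 for a random (Poisson) pattern, below 1 for regularity, above 1 for clustering. This sketch applies it to synthetic 1-D positions along a cortical strip; the data and cluster parameters are invented for illustration.

```python
# Quadrat-count index of dispersion on synthetic 1-D lesion positions:
# ~1 for random (Poisson), <1 for regular, >1 for clustered patterns.
import numpy as np

def dispersion_index(positions, n_quadrats, length):
    counts, _ = np.histogram(positions, bins=n_quadrats, range=(0, length))
    return counts.var() / counts.mean()

rng = np.random.default_rng(1)
length, n = 1000.0, 400

random_pts = rng.uniform(0, length, n)
# Clustered: points gathered around 10 regularly spaced cluster centres,
# loosely mimicking clusters distributed parallel to the pia mater.
centres = np.linspace(50, 950, 10)
clustered_pts = (rng.choice(centres, n) + rng.normal(0, 5, n)) % length

di_random = dispersion_index(random_pts, 50, length)
di_clustered = dispersion_index(clustered_pts, 50, length)
```

Regularly spaced clusters, as reported for Abeta deposits and NFT, show up as clustering at small quadrat sizes and regularity at quadrat sizes near the cluster spacing.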
Abstract:
Perception of Mach bands may be explained by spatial filtering ('lateral inhibition') that can be approximated by 2nd derivative computation, and several alternative models have been proposed. To distinguish between them, we used a novel set of 'generalised Gaussian' images, in which the sharp ramp-plateau junction of the Mach ramp was replaced by smoother transitions. The images ranged from a slightly blurred Mach ramp to a Gaussian edge and beyond, and also included a sine-wave edge. The probability of seeing Mach bands increased with the (relative) sharpness of the junction, but was largely independent of absolute spatial scale. These data did not fit the predictions of MIRAGE, nor 2nd derivative computation at a single fine scale. In experiment 2, observers used a cursor to mark features on the same set of images. Data on perceived position of Mach bands did not support the local energy model. Perceived width of Mach bands was poorly explained by a single-scale edge detection model, despite its previous success with Mach edges (Wallis & Georgeson, 2009, Vision Research, 49, 1886-1893). A more successful model used separate (odd and even) scale-space filtering for edges and bars, local peak detection to find candidate features, and the MAX operator to compare odd- and even-filter response maps (Georgeson, VSS 2006, Journal of Vision 6(6), 191a). Mach bands are seen when there is a local peak in the even-filter (bar) response map, AND that peak value exceeds corresponding responses in the odd-filter (edge) maps.
Abstract:
A well-known property of orientation-tuned neurons in the visual cortex is that they are suppressed by the superposition of an orthogonal mask. This phenomenon has been explained in terms of physiological constraints (synaptic depression), engineering solutions for components with poor dynamic range (contrast normalization) and fundamental coding strategies for natural images (redundancy reduction). A common but often tacit assumption is that the suppressive process is equally potent at different spatial and temporal scales of analysis. To determine whether it is so, we measured psychophysical cross-orientation masking (XOM) functions for flickering horizontal Gabor stimuli over wide ranges of spatio-temporal frequency and contrast. We found that orthogonal masks raised contrast detection thresholds substantially at low spatial frequencies and high temporal frequencies (high speeds), and that small and unexpected levels of facilitation were evident elsewhere. The data were well fit by a functional model of contrast gain control, where (i) the weight of suppression increased with the ratio of temporal to spatial frequency and (ii) the weight of facilitatory modulation was the same for all conditions, but outcompeted by suppression at higher contrasts. These results (i) provide new constraints for models of primary visual cortex, (ii) associate XOM and facilitation with the transient magnocellular and sustained parvocellular streams, respectively, and (iii) reconcile earlier conflicting psychophysical reports on XOM.
Abstract:
Masking is said to occur when a mask stimulus interferes with the visibility of a target (test) stimulus. One widely held view of this process supposes interactions between mask and test mechanisms (cross-channel masking), and explicit models (e.g., J. M. Foley, 1994) have proposed that the interactions are inhibitory. Unlike a within-channel model, where masking involves the combination of mask and test stimulus within a single mechanism, this cross-channel inhibitory model predicts that the mask should attenuate the perceived contrast of a test stimulus. Another possibility is that masking is due to an increase in noise, in which case, perception of contrast should be unaffected once the signal exceeds detection threshold. We use circular patches and annuli of sine-wave grating in contrast detection and contrast matching experiments to test these hypotheses and investigate interactions across spatial frequency, orientation, field position, and eye of origin. In both types of experiments we found substantial effects of masking that can occur over a factor of 3 in spatial frequency, 45° in orientation, across different field positions and between different eyes. We found the effects to be greatest at the lowest test spatial frequency we used (0.46 c/deg), and when the mask and test differed in all four dimensions simultaneously. This is surprising in light of previous work where it was concluded that suppression from the surround was strictly monocular (C. Chubb, G. Sperling, & J. A. Solomon, 1989). The results confirm that above detection threshold, cross-channel masking involves contrast suppression and not (purely) mask-induced noise. We conclude that cross-channel masking can be a powerful phenomenon, particularly at low test spatial frequencies and when mask and test are presented to different eyes. © 2004 ARVO.
Abstract:
Adapting to blurred or sharpened images alters perceived blur of a focused image (M. A. Webster, M. A. Georgeson, & S. M. Webster, 2002). We asked whether blur adaptation results in (a) renormalization of perceived focus or (b) a repulsion aftereffect. Images were checkerboards or 2-D Gaussian noise, whose amplitude spectra had (log-log) slopes from -2 (strongly blurred) to 0 (strongly sharpened). Observers adjusted the spectral slope of a comparison image to match different test slopes after adaptation to blurred or sharpened images. Results did not show repulsion effects but were consistent with some renormalization. Test blur levels at and near a blurred or sharpened adaptation level were matched by more focused slopes (closer to 1/f) but with little or no change in appearance after adaptation to focused (1/f) images. A model of contrast adaptation and blur coding by multiple-scale spatial filters predicts these blur aftereffects and those of Webster et al. (2002). A key proposal is that observers are pre-adapted to natural spectra, and blurred or sharpened spectra induce changes in the state of adaptation. The model illustrates how norms might be encoded and recalibrated in the visual system even when they are represented only implicitly by the distribution of responses across multiple channels.