19 results for large spatial scale

in Aston University Research Archive


Relevance:

100.00%

Publisher:

Abstract:

Gamma activity to stationary grating stimuli was studied non-invasively using MEG recordings in humans. Using a spatial filtering technique, we localized gamma activity to primary visual cortex. We tested the hypothesis that spatial frequency properties of visual stimuli may be related to the temporal frequency characteristics of the associated cortical responses. We devised a method to assess temporal frequency differences between stimulus-related responses that typically exhibit complex spectral shapes. We applied this methodology to either single-trial (induced) or time-averaged (evoked) responses in four frequency ranges (0-40, 20-60, 40-80 and 60-100 Hz) and two time windows (either the entire duration of stimulus presentation or the first second following stimulus onset). Our results suggest that stimuli of varying spatial frequency induce responses that exhibit significantly different temporal frequency characteristics. These effects were particularly accentuated for induced responses in the classical gamma frequency band (20-60 Hz) analyzed over the entire duration of stimulus presentation. Strikingly, examining the first second of the responses following stimulus onset resulted in significant loss in stimulus specificity, suggesting that late signal components contain functionally relevant information. These findings advocate a functional role of gamma activity in sensory representation. We suggest that stimulus specific frequency characteristics of MEG signals can be mapped to processes of neuronal synchronization within the framework of coupled dynamical systems.

Relevance:

100.00%

Publisher:

Abstract:

Masking is said to occur when a mask stimulus interferes with the visibility of a target (test) stimulus. One widely held view of this process supposes interactions between mask and test mechanisms (cross-channel masking), and explicit models (e.g., J. M. Foley, 1994) have proposed that the interactions are inhibitory. Unlike a within-channel model, where masking involves the combination of mask and test stimulus within a single mechanism, this cross-channel inhibitory model predicts that the mask should attenuate the perceived contrast of a test stimulus. Another possibility is that masking is due to an increase in noise, in which case, perception of contrast should be unaffected once the signal exceeds detection threshold. We use circular patches and annuli of sine-wave grating in contrast detection and contrast matching experiments to test these hypotheses and investigate interactions across spatial frequency, orientation, field position, and eye of origin. In both types of experiments we found substantial effects of masking that can occur over a factor of 3 in spatial frequency, 45° in orientation, across different field positions and between different eyes. We found the effects to be greatest at the lowest test spatial frequency we used (0.46 c/deg), and when the mask and test differed in all four dimensions simultaneously. This is surprising in light of previous work where it was concluded that suppression from the surround was strictly monocular (C. Chubb, G. Sperling, & J. A. Solomon, 1989). The results confirm that above detection threshold, cross-channel masking involves contrast suppression and not (purely) mask-induced noise. We conclude that cross-channel masking can be a powerful phenomenon, particularly at low test spatial frequencies and when mask and test are presented to different eyes. © 2004 ARVO.
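A minimal numeric sketch of the contrast-suppression account tested above: a divisive-inhibition response in the spirit of Foley (1994), in which mask contrast enters the suppressive denominator. The exponents and weights here are illustrative placeholders, not Foley's fitted parameters.

```python
def foley_response(c_test, c_mask, p=2.4, q=2.0, z=0.1, w=0.2):
    """Divisive-inhibition response in the spirit of Foley (1994):
    excitation from the test is divided by a suppressive pool that
    includes the mask contrast (cross-channel suppression).
    Exponents and weights are illustrative, not fitted values."""
    return c_test ** p / (z + c_test ** q + w * c_mask ** q)

# A suprathreshold test evokes a smaller response in the presence of a
# mask, which is read out as reduced perceived (matched) contrast,
# unlike a pure noise account where suprathreshold appearance is spared:
r_alone = foley_response(0.3, 0.0)
r_masked = foley_response(0.3, 0.5)
assert r_masked < r_alone
```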

Relevance:

100.00%

Publisher:

Abstract:

Perception of Mach bands may be explained by spatial filtering ('lateral inhibition') that can be approximated by 2nd derivative computation, and several alternative models have been proposed. To distinguish between them, we used a novel set of 'generalised Gaussian' images, in which the sharp ramp-plateau junction of the Mach ramp was replaced by smoother transitions. The images ranged from a slightly blurred Mach ramp to a Gaussian edge and beyond, and also included a sine-wave edge. The probability of seeing Mach bands increased with the (relative) sharpness of the junction, but was largely independent of absolute spatial scale. These data did not fit the predictions of MIRAGE, nor 2nd derivative computation at a single fine scale. In experiment 2, observers used a cursor to mark features on the same set of images. Data on perceived position of Mach bands did not support the local energy model. Perceived width of Mach bands was poorly explained by a single-scale edge detection model, despite its previous success with Mach edges (Wallis & Georgeson, 2009, Vision Research, 49, 1886-1893). A more successful model used separate (odd and even) scale-space filtering for edges and bars, local peak detection to find candidate features, and the MAX operator to compare odd- and even-filter response maps (Georgeson, VSS 2006, Journal of Vision 6(6), 191a). Mach bands are seen when there is a local peak in the even-filter (bar) response map, AND that peak value exceeds the corresponding responses in the odd-filter (edge) maps.
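The bar-finding stage of the model can be sketched numerically. This toy version applies only the even (2nd-derivative) operator at a single scale to a Mach ramp and locates candidate bands at local peaks of the rectified response; the scale-space search and the MAX comparison against the odd (edge) maps are omitted, and the kernel size, filter scale and stimulus grid are assumptions.

```python
import numpy as np

def gauss_deriv(x, sigma, order):
    """Unnormalised Gaussian derivative kernels: order 1 = odd (edge)
    operator, order 2 = even (bar) operator."""
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    if order == 1:
        return -x / sigma ** 2 * g
    return (x ** 2 / sigma ** 2 - 1) / sigma ** 2 * g

# Mach ramp: dark plateau, linear ramp, light plateau, junctions at +/-100.
x = np.arange(-200, 201)
lum = np.clip(x / 100.0, -1.0, 1.0)

sigma = 8.0
k = np.arange(-40, 41)
even = np.convolve(lum, gauss_deriv(k, sigma, 2), mode='same')

# Candidate Mach bands: strong local peaks of the rectified even-filter
# response.  The threshold and the search are restricted to the interior
# to avoid artefacts from the zero padding at the array ends.
mag = np.abs(even)
thresh = 0.5 * mag[50:-50].max()
bands = [int(x[i]) for i in range(50, len(x) - 50)
         if mag[i] > thresh and mag[i] >= mag[i - 1] and mag[i] >= mag[i + 1]]
```

The detected positions fall at the ramp-plateau junctions, where Mach bands are seen.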

Relevance:

90.00%

Publisher:

Abstract:

Very large spatially-referenced datasets, for example, those derived from satellite-based sensors which sample across the globe or large monitoring networks of individual sensors, are becoming increasingly common and more widely available for use in environmental decision making. In large or dense sensor networks, huge quantities of data can be collected over small time periods. In many applications the generation of maps, or predictions at specific locations, from the data in (near) real-time is crucial. Geostatistical operations such as interpolation are vital in this map-generation process, and in emergency situations the resulting predictions need to be available almost instantly, so that decision makers can make informed decisions and define risk and evacuation zones. It is also helpful when analysing data in less time-critical applications, for example when interacting directly with the data for exploratory analysis, that the algorithms are responsive within a reasonable time frame. Performing geostatistical analysis on such large spatial datasets can present a number of problems, particularly in the case where maximum likelihood estimation is used. Although the storage requirements only scale linearly with the number of observations in the dataset, the computational requirements in terms of memory and speed scale quadratically and cubically, respectively. Most modern commodity hardware has at least two processor cores, if not more, and other mechanisms for parallel computation, such as Grid-based systems, are also becoming increasingly available. However, there currently seems to be little interest in exploiting this extra processing power within the context of geostatistics. In this paper we review the existing parallel approaches for geostatistics.
By recognising that different natural parallelisms exist and can be exploited depending on whether the dataset is sparsely or densely sampled with respect to the range of variation, we introduce two contrasting novel implementations of parallel algorithms based on approximating the data likelihood, extending the methods of Vecchia [1988] and Tresp [2000]. Using parallel maximum likelihood variogram estimation and parallel prediction algorithms, we show that computational time can be significantly reduced. We demonstrate this with both sparsely and densely sampled data on a variety of architectures, ranging from the common dual-core processor found in many modern desktop computers to large multi-node supercomputers. To highlight the strengths and weaknesses of the different methods we employ synthetic data sets, and go on to show how the methods allow maximum likelihood based inference on the exhaustive Walker Lake data set.
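The block-parallel idea can be sketched as an independent-blocks approximation to the Gaussian log-likelihood, with each block evaluated concurrently and the terms summed. This is a simplified stand-in for the Vecchia [1988] and Tresp [2000] style approximations the paper extends; the exponential covariance, jitter term and thread-based parallelism are assumptions for illustration.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def exp_cov(coords, sill, length):
    """Exponential covariance between 1-D sample locations."""
    d = np.abs(coords[:, None] - coords[None, :])
    return sill * np.exp(-d / length)

def block_loglik(args):
    """Exact Gaussian log-likelihood of one block of observations.
    This O(n^3) solve is what limits the full-data computation."""
    coords, y, sill, length = args
    K = exp_cov(coords, sill, length) + 1e-9 * np.eye(len(y))
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * (logdet + y @ np.linalg.solve(K, y)
                   + len(y) * np.log(2 * np.pi))

def parallel_loglik(coords, y, sill, length, n_blocks=4):
    """Independent-blocks approximation: sort by location, split into
    blocks, evaluate each block's likelihood in parallel, and sum."""
    order = np.argsort(coords)
    tasks = [(coords[i], y[i], sill, length)
             for i in np.array_split(order, n_blocks)]
    with ThreadPoolExecutor() as ex:
        return sum(ex.map(block_loglik, tasks))

rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, 200)
y = rng.standard_normal(200)
full = block_loglik((coords, y, 1.0, 10.0))
approx = parallel_loglik(coords, y, 1.0, 10.0, n_blocks=4)
```

With a single block the approximation reduces to the exact full-data likelihood; with more blocks, cubic cost per solve drops at the price of ignoring inter-block covariance.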

Relevance:

90.00%

Publisher:

Abstract:

The fundamental problem faced by noninvasive neuroimaging techniques such as EEG/MEG is to elucidate functionally important aspects of the microscopic neuronal network dynamics from macroscopic aggregate measurements. Due to the mixing of the activities of large neuronal populations in the observed macroscopic aggregate, recovering the underlying network that generates the signal in the absence of any additional information represents a considerable challenge. Recent MEG studies have shown that macroscopic measurements contain sufficient information to allow the differentiation between patterns of activity, which are likely to represent different stimulus-specific collective modes in the underlying network (Hadjipapas, A., Adjamian, P., Swettenham, J.B., Holliday, I.E., Barnes, G.R., 2007. Stimuli of varying spatial scale induce gamma activity with distinct temporal characteristics in human visual cortex. NeuroImage 35, 518–530). The next question arising in this context is whether aspects of collective network activity can be recovered from a macroscopic aggregate signal. We propose that this issue is most appropriately addressed if MEG/EEG signals are to be viewed as macroscopic aggregates arising from networks of coupled systems as opposed to aggregates across a mass of largely independent neural systems. We show that collective modes arising in a network of simulated coupled systems can indeed be recovered from the macroscopic aggregate. Moreover, we show that nonlinear state space methods yield a good approximation of the number of effective degrees of freedom in the network. Importantly, information about hidden variables, which do not directly contribute to the aggregate signal, can also be recovered. Finally, this theoretical framework can be applied to experimental MEG/EEG data in the future, enabling the inference of state dependent changes in the degree of local synchrony in the underlying network.
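The idea that effective degrees of freedom can be estimated from a scalar aggregate can be illustrated with a toy linear example (not the paper's nonlinear method): delay-embed an aggregate of two collective modes and count the significant singular values of the trajectory matrix. The signal frequencies, embedding dimension and lag below are arbitrary choices.

```python
import numpy as np

t = np.arange(0, 60, 0.05)
# Macroscopic aggregate: the sum of two collective modes at distinct
# frequencies (each sinusoid contributes two degrees of freedom).
agg = np.sin(2 * np.pi * 1.1 * t) + 0.5 * np.sin(2 * np.pi * 2.7 * t)

# Delay embedding (Takens): stack lagged copies of the scalar aggregate.
m, lag = 12, 3
rows = len(agg) - (m - 1) * lag
X = np.column_stack([agg[i * lag: i * lag + rows] for i in range(m)])

# Effective degrees of freedom ~ numerical rank of the trajectory matrix.
s = np.linalg.svd(X, compute_uv=False)
dof = int(np.sum(s > 0.01 * s[0]))
```

Although only the one-dimensional aggregate is observed, the embedding recovers four effective degrees of freedom, two per underlying mode.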

Relevance:

90.00%

Publisher:

Abstract:

Most existing color-based tracking algorithms utilize the statistical color information of the object as the tracking cue, without maintaining the spatial structure within a single chromatic image. Recently, research on multilinear algebra has provided the means to preserve this spatial structural relationship in a representation of the image ensemble. In this paper, a third-order color tensor is constructed to represent the object to be tracked. Considering the influence of environmental changes on tracking, biased discriminant analysis (BDA) is extended to tensor biased discriminant analysis (TBDA) for distinguishing the object from the background. At the same time, an incremental scheme for TBDA is developed for online learning of the tensor biased discriminant subspace, which can be used to adapt to appearance variations of both the object and the background. The experimental results show that the proposed method can precisely track objects undergoing large pose, scale and lighting changes, as well as partial occlusion. © 2009 Elsevier B.V.
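As a minimal illustration of the multilinear representation (not the TBDA algorithm itself), a colour patch can be kept as a third-order tensor and flattened along any single mode; such mode-n unfoldings are the objects on which tensor discriminant methods operate, one projection per mode. The patch shape below is arbitrary.

```python
import numpy as np

# A colour image patch as a third-order tensor: height x width x channel.
rng = np.random.default_rng(0)
patch = rng.random((16, 12, 3))

def unfold(tensor, mode):
    """Mode-n unfolding: rotate `mode` to the front and flatten the
    remaining modes into columns.  Unlike vectorising the whole patch,
    each unfolding keeps one structural axis intact."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

assert unfold(patch, 0).shape == (16, 36)   # rows vs (width*channel)
assert unfold(patch, 2).shape == (3, 192)   # channels vs (height*width)
```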

Relevance:

90.00%

Publisher:

Abstract:

Simple features such as edges are the building blocks of spatial vision, and so I ask: how are visual features and their properties (location, blur and contrast) derived from the responses of spatial filters in early vision; how are these elementary visual signals combined across the two eyes; and when are they not combined? Our psychophysical evidence from blur-matching experiments strongly supports a model in which edges are found at the spatial peaks of response of odd-symmetric receptive fields (gradient operators), and their blur B is given by the spatial scale of the most active operator. This model can explain some surprising aspects of blur perception: edges look sharper when they are low contrast, and when their length is made shorter. Our experiments on binocular fusion of blurred edges show that single vision is maintained for disparities up to about 2.5*B, followed by diplopia or suppression of one edge at larger disparities. Edges of opposite polarity never fuse. Fusion may be served by binocular combination of monocular gradient operators, but that combination - involving binocular summation and interocular suppression - is not completely understood. In particular, linear summation (supported by psychophysical and physiological evidence) predicts that fused edges should look more blurred with increasing disparity (up to 2.5*B), but results surprisingly show that edge blur appears constant across all disparities, whether fused or diplopic. Finally, when edges of very different blur are shown to the left and right eyes fusion may not occur, but perceived blur is not simply given by the sharper edge, nor by the higher contrast. Instead, it is the ratio of contrast to blur that matters: the edge with the steeper gradient dominates perception. The early stages of binocular spatial vision speak the language of luminance gradients.
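The final observation, that the edge with the steeper gradient dominates, amounts to a simple selection rule. The sketch below is a toy reading of it, with hypothetical (contrast, blur) pairs for the two eyes.

```python
def dominant_blur(left, right):
    """Toy reading of the steeper-gradient rule: each eye's edge is a
    (contrast, blur) pair, gradient steepness ~ contrast / blur, and
    the steeper edge's blur is what is perceived."""
    steeper = max(left, right, key=lambda edge: edge[0] / edge[1])
    return steeper[1]

# Neither the sharper edge (blur 4) nor simply the higher contrast
# decides: the high-contrast blurred edge has the steeper gradient
# (0.8 / 10 = 0.08 vs 0.2 / 4 = 0.05), so its blur dominates.
assert dominant_blur((0.8, 10.0), (0.2, 4.0)) == 10.0
```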

Relevance:

80.00%

Publisher:

Abstract:

We studied the relationship between the decline in sensitivity that occurs with eccentricity for stimuli of different spatial scale defined by either luminance (LM) or contrast (CM) modulation. We show that the detectability of CM stimuli declines with eccentricity in a spatial frequency-dependent manner, and that the rate of sensitivity decline for CM stimuli is roughly that expected from their 1st order carriers, except, possibly, at finer scales. Using an equivalent noise paradigm, we investigated the possible reasons for why the foveal sensitivity for detecting LM and CM stimuli differs as well as the reason why the detectability of 1st order stimuli declines with eccentricity. We show the former can be modeled by an increase in internal noise whereas the latter involves both an increase in internal noise and a loss of efficiency. To encompass both the threshold and suprathreshold transfer properties of peripheral vision, we propose a model in terms of the contrast gain of the underlying mechanisms.
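The equivalent-noise logic can be sketched with the standard linear-amplifier model, a common reading of the paradigm; the parameter values below are illustrative, not the paper's fits.

```python
def contrast_threshold(n_ext, n_eq, efficiency):
    """Linear-amplifier (equivalent-noise) model: squared contrast
    threshold = (equivalent internal noise + external noise) divided
    by calculation efficiency.  Values are illustrative only."""
    return ((n_eq + n_ext) / efficiency) ** 0.5

# More internal noise raises thresholds mainly in low external noise:
assert contrast_threshold(0.0, 2.0, 0.5) > contrast_threshold(0.0, 1.0, 0.5)
# ...but its effect washes out when external noise dominates,
assert contrast_threshold(100.0, 2.0, 0.5) / contrast_threshold(100.0, 1.0, 0.5) < 1.01
# whereas an efficiency loss raises thresholds at every noise level.
assert contrast_threshold(100.0, 1.0, 0.25) > contrast_threshold(100.0, 1.0, 0.5)
```

This separation is what lets the paradigm attribute the foveal LM/CM difference to internal noise alone, and the eccentricity decline to internal noise plus an efficiency loss.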

Relevance:

80.00%

Publisher:

Abstract:

We have shown previously that a template model for edge perception successfully predicts perceived blur for a variety of edge profiles (Georgeson, 2001 Journal of Vision 1 438a; Barbieri-Hesse and Georgeson, 2002 Perception 31 Supplement, 54). This study concerns the perceived contrast of edges. Our model spatially differentiates the luminance profile, half-wave rectifies this first derivative, and then differentiates again to create the edge's 'signature'. The spatial scale of the signature is evaluated by filtering it with a set of Gaussian derivative operators. This process finds the correlation between the signature and each operator kernel at each position. These kernels therefore act as templates, and the position and scale of the best-fitting template indicate the position and blur of the edge. Our previous finding, that reducing edge contrast reduces perceived blur, can be explained by replacing the half-wave rectifier with a smooth, biased rectifier function (May and Georgeson, 2003 Perception 32 388; May and Georgeson, 2003 Perception 32 Supplement, 46). With the half-wave rectifier, the peak template response R to a Gaussian edge with contrast C and scale s is given by: R = Cπ^(-1/4)s^(-3/2). Hence, edge contrast can be estimated from response magnitude and blur: C = Rπ^(1/4)s^(3/2). Use of this equation with the modified rectifier predicts that perceived contrast will decrease with increasing blur, particularly at low contrasts. Contrast-matching experiments supported this prediction. In addition, the model correctly predicts the perceived contrast of Gaussian edges modified either by spatial truncation or by the addition of a ramp.
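Assuming the response law for the Gaussian edge is R = C·pi^(-1/4)·s^(-3/2), the contrast estimate is just its algebraic inversion; a minimal numeric check:

```python
import math

def template_response(contrast, scale):
    """Peak template response R to a Gaussian edge of contrast C and
    blur (scale) s, under the assumed law R = C * pi**(-1/4) * s**(-3/2)."""
    return contrast * math.pi ** -0.25 * scale ** -1.5

def estimated_contrast(response, scale):
    """Inverse mapping C = R * pi**(1/4) * s**(3/2): contrast recovered
    from response magnitude and the estimated blur."""
    return response * math.pi ** 0.25 * scale ** 1.5

# Round trip: the estimate recovers the contrast that produced R,
# and blurrier edges of the same contrast give weaker peak responses.
r = template_response(0.4, 2.0)
assert abs(estimated_contrast(r, 2.0) - 0.4) < 1e-12
assert template_response(0.4, 4.0) < template_response(0.4, 2.0)
```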

Relevance:

80.00%

Publisher:

Abstract:

We studied the visual mechanisms that encode edge blur in images. Our previous work suggested that the visual system spatially differentiates the luminance profile twice to create the `signature' of the edge, and then evaluates the spatial scale of this signature profile by applying Gaussian derivative templates of different sizes. The scale of the best-fitting template indicates the blur of the edge. In blur-matching experiments, a staircase procedure was used to adjust the blur of a comparison edge (40% contrast, 0.3 s duration) until it appeared to match the blur of test edges at different contrasts (5% - 40%) and blurs (6 - 32 min of arc). Results showed that lower-contrast edges looked progressively sharper. We also added a linear luminance gradient to blurred test edges. When the added gradient was of opposite polarity to the edge gradient, it made the edge look progressively sharper. Both effects can be explained quantitatively by the action of a half-wave rectifying nonlinearity that sits between the first and second (linear) differentiating stages. This rectifier was introduced to account for a range of other effects on perceived blur (Barbieri-Hesse and Georgeson, 2002 Perception 31 Supplement, 54), but it readily predicts the influence of the negative ramp. The effect of contrast arises because the rectifier has a threshold: it not only suppresses negative values but also small positive values. At low contrasts, more of the gradient profile falls below threshold and its effective spatial scale shrinks in size, leading to perceived sharpening.

Relevance:

80.00%

Publisher:

Abstract:

We describe a template model for perception of edge blur and identify a crucial early nonlinearity in this process. The main principle is to spatially filter the edge image to produce a 'signature', and then find which of a set of templates best fits that signature. Psychophysical blur-matching data strongly support the use of a second-derivative signature, coupled to Gaussian first-derivative templates. The spatial scale of the best-fitting template signals the edge blur. This model predicts blur-matching data accurately for a wide variety of Gaussian and non-Gaussian edges, but it suffers a bias when edges of opposite sign come close together in sine-wave gratings and other periodic images. This anomaly suggests a second general principle: the region of an image that 'belongs' to a given edge should have a consistent sign or direction of luminance gradient. Segmentation of the gradient profile into regions of common sign is achieved by implementing the second-derivative 'signature' operator as two first-derivative operators separated by a half-wave rectifier. This multiscale system of nonlinear filters predicts perceived blur accurately for periodic and aperiodic waveforms. We also outline its extension to 2-D images and infer the 2-D shape of the receptive fields.
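The differentiate, rectify, differentiate signature and the template-matching scale estimate can be sketched numerically. This toy 1-D version assumes a Gaussian edge on a unit-spaced grid and unit-norm Gaussian first-derivative templates, and omits the gradient-sign segmentation and 2-D machinery described above.

```python
import numpy as np

def gauss(x, s):
    """Normalised Gaussian of scale s."""
    return np.exp(-x ** 2 / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))

x = np.arange(-100, 101, dtype=float)
blur = 8.0
edge = np.cumsum(gauss(x, blur))  # Gaussian-blurred edge profile

# Signature operator: differentiate, half-wave rectify, differentiate.
d1 = np.gradient(edge)
rect = np.maximum(d1, 0.0)
signature = np.gradient(rect)

# Templates: unit-norm Gaussian first-derivative kernels over a range of
# scales; the scale of the best-fitting template signals the edge blur.
scores = {}
for s in np.arange(2.0, 20.1, 0.5):
    t = np.gradient(gauss(x, s))
    t /= np.linalg.norm(t)
    scores[s] = np.max(np.abs(np.correlate(signature, t, mode='same')))
est_blur = max(scores, key=scores.get)
```

For an isolated edge the rectifier passes the gradient unchanged, so the signature is the second derivative and the best-fitting template scale lands on the true blur.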

Relevance:

80.00%

Publisher:

Abstract:

Once familiar with the fire test rig constructed by M Kay, and modified to allow incorporation of both video and computer facilities, Melamine Phosphate production was scaled up from small to large laboratory scale, and commercial-scale production was then considered. Samples produced at each stage were compared analytically, visually and in fire testing. The separation and drying stages at commercial scale remained practically unresolved, owing to a lack of test facilities. Different cure regimes for the Araldite MY753 and Versamid system were investigated, along with weathering tests on cured samples. Surface priming is suggested for large-scale application, though on a small scale a clean, unprimed surface was thought sufficient. Some heat-aged samples cracked at the edges but remained bonded during fire testing. An intumescent sample containing Melamine Phosphate, Araldite and Versamid could not be applied to a vertical surface successfully; the viscosity had to be increased to allow application and curing. Various additives were tested, two successful ones being fumed silica and a solvent, isopropanol. The low percentage of fumed silica used was incorporated into the sample, and the viscosity and fire test results were compared with a 'standard sample'. An expanding graphite incorporated into a standard sample made mixing and application increasingly difficult, owing to the lubricating effect of the graphite, but it produced a good-quality, stable char. A suitable formulation could now be mixed, applied and cured, and, assuming no adverse interaction between the additives, would protect the sample in the event of a fire.

Relevance:

80.00%

Publisher:

Abstract:

In this thesis, details of a proposed method for the elastic-plastic failure load analysis of complete building structures are given. In order to handle the problem, a computer programme in Atlas Autocode is produced. The structures consist of a number of parallel shear walls and intermediate frames connected by floor slabs. The results of an experimental investigation are given to verify the theoretical results and to demonstrate various factors that may influence the behaviour of these structures. Large full-scale practical structures are also analysed by the proposed method, and suggestions are made for achieving design economy as well as for extending research in various aspects of this field. The existing programme for elastic-plastic analysis of large frames is modified to allow for the effect of composite action of structural members, i.e. reinforced concrete floor slabs and the supporting steel beams. This modified programme is used to analyse some framed structures with composite action as well as those which incorporate plates and shear walls. The results obtained are studied to ascertain the influence of composite action and other factors on the load-carrying capacity of both bare frames and complete building structures. The theoretical failure load presented in this thesis does not predict the overall failure load of the structure, nor does it predict the partial failure load of the shear walls and slabs; it merely predicts the partial failure load of a single frame and assumes that the loss of stiffness of such a frame renders the overall structure unusable. For most structures the analysis proposed in this thesis is likely to break down prematurely due to the failure of the slab and shear wall system, and this factor must be taken into account in any future work on such structures. The experimental work reported in this thesis is acknowledged to be unsatisfactory as a verification of the limited theory proposed. In particular, Perspex was not found to be a suitable material for testing at high loads; micro-concrete may be more suitable.

Relevance:

80.00%

Publisher:

Abstract:

This thesis investigates various aspects of peripheral vision, which is known not to be as acute as vision at the point of fixation. Differences between foveal and peripheral vision are generally thought to be of a quantitative rather than a qualitative nature. However, the rate of decline in sensitivity between foveal and peripheral vision is known to be task dependent and the mechanisms underlying the differences are not yet well understood. Several experiments described here have employed a psychophysical technique referred to as 'spatial scaling'. Thresholds are determined at several eccentricities for ranges of stimuli which are magnified versions of one another. Using this methodology a parameter called the E2 value is determined, which defines the eccentricity at which stimulus size must double in order to maintain performance equivalent to that at the fovea. Experiments of this type have evaluated the eccentricity dependencies of detection tasks (kinetic and static presentation of a differential light stimulus), resolution tasks (bar orientation discrimination in the presence of flanking stimuli, word recognition and reading performance), and relative localisation tasks (curvature detection and discrimination). Most tasks could be made equal across the visual field by appropriate magnification. E2 values are found to vary widely dependent on the task, and possible reasons for such variations are discussed. The dependence of positional acuity thresholds on stimulus eccentricity, separation and spatial scale parameters is also examined. The relevance of each factor in producing 'Weber's law' for position can be determined from the results.
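The E2 construct described above implies a linear size-scaling rule; a minimal sketch:

```python
def scaled_size(s_fovea, ecc, e2):
    """Spatial-scaling rule used with E2 values: the stimulus size
    needed at eccentricity `ecc` (deg) for performance equal to that
    of a foveal size `s_fovea` grows linearly,
    S(E) = S_fovea * (1 + E / E2), doubling at E = E2."""
    return s_fovea * (1.0 + ecc / e2)

# The size doubles exactly at E = E2:
assert scaled_size(1.0, 2.0, 2.0) == 2.0
# A task with a small E2 (steep peripheral decline) needs much larger
# stimuli at 10 deg eccentricity than a task with a large E2:
assert scaled_size(1.0, 10.0, 0.5) > scaled_size(1.0, 10.0, 5.0)
```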