33 results for Flail space model


Relevance: 30.00%

Abstract:

A multi-scale model of edge coding based on normalized Gaussian derivative filters successfully predicts perceived scale (blur) for a wide variety of edge profiles [Georgeson, M. A., May, K. A., Freeman, T. C. A., & Hesse, G. S. (in press). From filters to features: Scale-space analysis of edge and blur coding in human vision. Journal of Vision]. Our model spatially differentiates the luminance profile, half-wave rectifies the 1st derivative, and then differentiates twice more, to give the 3rd derivative of all regions with a positive gradient. This process is implemented by a set of Gaussian derivative filters with a range of scales. Peaks in the inverted normalized 3rd derivative across space and scale indicate the positions and scales of the edges. The edge contrast can be estimated from the height of the peak. The model provides a veridical estimate of the scale and contrast of edges that have a Gaussian integral profile. Therefore, since scale and contrast are independent stimulus parameters, the model predicts that the perceived value of either of these parameters should be unaffected by changes in the other. This prediction was found to be incorrect: reducing the contrast of an edge made it look sharper, and increasing its scale led to a decrease in the perceived contrast. Our model can account for these effects when the simple half-wave rectifier after the 1st derivative is replaced by a smoothed threshold function described by two parameters. For each subject, one pair of parameters provided a satisfactory fit to the data from all the experiments presented here and in the accompanying paper [May, K. A. & Georgeson, M. A. (2007). Added luminance ramp alters perceived edge blur and contrast: A critical test for derivative-based models of edge coding. Vision Research, 47, 1721-1731]. Thus, when we allow for the visual system's insensitivity to very shallow luminance gradients, our multi-scale model can be extended to edge coding over a wide range of contrasts and blurs. © 2007 Elsevier Ltd. All rights reserved.
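
The feed-forward pipeline described here (blur, differentiate, softly rectify, differentiate twice more, scale-normalize, locate peaks) can be sketched compactly. Below is a minimal Python sketch assuming 1-D luminance profiles; the parameters `t` and `w` are illustrative stand-ins for the two smoothed-threshold parameters fitted per subject, and the scale-normalization exponent is a placeholder, not the paper's value.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def edge_response(luminance, sigmas, t=0.01, w=0.005):
    """Return a (scale x space) response map whose peaks mark edge
    position (column) and blur (row); peak height estimates contrast.
    t and w stand in for the two smoothed-threshold parameters."""
    maps = []
    for sigma in sigmas:
        d1 = gaussian_filter1d(luminance, sigma, order=1)  # 1st derivative at this scale
        # smoothed threshold in place of plain half-wave rectification:
        # responses to gradients much shallower than t are suppressed
        r = d1 / (1.0 + np.exp(-(d1 - t) / w))
        d3 = np.gradient(np.gradient(r))                   # differentiate twice more
        maps.append(-d3 * sigma ** 1.5)  # invert + scale-normalize (exponent illustrative)
    return np.array(maps)
```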

Relevance: 30.00%

Abstract:

There is an alternative model of the one-way ANOVA called the 'random effects' model or 'nested' design, in which the objective is not to test specific effects but to estimate the degree of variation of a particular measurement and to compare different sources of variation that influence the measurement in space and/or time. The most important statistics from a random effects model are the components of variance, which estimate the variance associated with each of the sources of variation influencing a measurement. The nested design is particularly useful in preliminary experiments designed to estimate different sources of variation and in the planning of appropriate sampling strategies.
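
For the balanced one-way case, the components of variance follow from the standard method-of-moments estimators. The sketch below is generic textbook code, not code from the source, applied to hypothetical group arrays:

```python
import numpy as np

def variance_components(groups):
    """Random-effects one-way ANOVA: return the within-group and
    between-group components of variance for a balanced design
    (equal replicates n in each of k random groups)."""
    k = len(groups)                           # number of random groups
    n = len(groups[0])                        # replicates per group
    grand = np.mean(np.concatenate(groups))
    means = np.array([np.mean(g) for g in groups])
    ms_between = n * np.sum((means - grand) ** 2) / (k - 1)
    ms_within = sum(np.sum((np.asarray(g) - m) ** 2)
                    for g, m in zip(groups, means)) / (k * (n - 1))
    var_within = ms_within                                 # sigma^2 (error)
    var_between = max((ms_between - ms_within) / n, 0.0)   # sigma^2 (groups)
    return var_within, var_between
```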

Relevance: 30.00%

Abstract:

In this paper we discuss a fast Bayesian extension to kriging algorithms which has been used successfully for fast, automatic mapping in emergency conditions in the Spatial Interpolation Comparison 2004 (SIC2004) exercise. The application of kriging to automatic mapping raises several issues such as robustness, scalability, speed and parameter estimation. Various ad hoc solutions have been proposed and used extensively but they lack a sound theoretical basis. In this paper we show how observations can be projected onto a representative subset of the data, without losing significant information. This allows the complexity of the algorithm to grow as O(nm²), where n is the total number of observations and m is the size of the subset of the observations retained for prediction. The main contribution of this paper is to further extend this projective method through the application of space-limited covariance functions, which can be used as an alternative to the commonly used covariance models. In many real-world applications the correlation between observations essentially vanishes beyond a certain separation distance. Thus it makes sense to use a covariance model that encompasses this belief, since this leads to sparse covariance matrices for which optimised sparse matrix techniques can be used. In the presence of extreme values we show that space-limited covariance functions offer an additional benefit: they maintain the smoothness locally but at the same time lead to a more robust and compact global model. We show the performance of this technique coupled with the sparse extension to the kriging algorithm on synthetic data and outline a number of computational benefits such an approach brings. To test the relevance to automatic mapping we apply the method to the data used in a recent comparison of interpolation techniques (SIC2004) to map the levels of background ambient gamma radiation. © Springer-Verlag 2007.
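
A space-limited covariance in this sense is any model that is identically zero beyond a finite range. The sketch below uses the classical spherical model as one such choice (an assumption for illustration; the paper's specific family may differ) and stores the result as a sparse matrix:

```python
import numpy as np
from scipy import sparse
from scipy.spatial.distance import cdist

def spherical_cov(X1, X2, sill=1.0, rng=1.0):
    """Spherical covariance: exactly zero beyond the range rng, so the
    matrix is sparse whenever observations are well separated and
    optimised sparse solvers can be used in the kriging equations."""
    h = cdist(X1, X2) / rng
    dense = np.where(h < 1.0, sill * (1.0 - 1.5 * h + 0.5 * h ** 3), 0.0)
    return sparse.csr_matrix(dense)
```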

Relevance: 30.00%

Abstract:

The generation of very short range forecasts of precipitation in the 0-6 h time window is traditionally referred to as nowcasting. Most existing nowcasting systems essentially extrapolate radar observations in some manner; however, very few systems account for the uncertainties involved. Thus deterministic forecasts are produced, which are of limited use when decisions must be made, since they carry no measure of confidence or spread. This paper develops a Bayesian state space modelling framework for quantitative precipitation nowcasting which is probabilistic from conception. The model treats the observations (radar) as noisy realisations of the underlying true precipitation process, recognising that this process can never be completely known, and thus must be represented probabilistically. In the model presented here the dynamics of the precipitation are dominated by advection, so this is a probabilistic extrapolation forecast. The model is designed in such a way as to minimise the computational burden, while maintaining a full, joint representation of the probability density function of the precipitation process. The update and evolution equations avoid the need to sample, so only one model need be run, as opposed to the more traditional ensemble route. It is shown that the model works well on both simulated and real data, but that further work is required before the model can be used operationally. © 2004 Elsevier B.V. All rights reserved.
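
The closed-form update and evolution equations referred to are those of a linear-Gaussian state-space model. The generic sketch below (a simplification, not the paper's implementation) shows one evolve/update cycle, with a hypothetical advection operator F playing the role of the evolution matrix:

```python
import numpy as np

def advect_and_update(m, P, y, F, H, Q, R):
    """One closed-form cycle of a linear-Gaussian state-space model
    (a Kalman filter step): F encodes advection of the precipitation
    field, H the radar observation operator, Q and R the process and
    observation noise covariances. No ensemble sampling is needed."""
    m = F @ m                       # evolve: advect the mean field
    P = F @ P @ F.T + Q             # evolve: inflate the uncertainty
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    m = m + K @ (y - H @ m)         # update against radar observation y
    P = (np.eye(len(m)) - K @ H) @ P
    return m, P
```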

Relevance: 30.00%

Abstract:

In this paper we present a novel method for emulating a stochastic, or random output, computer model and show its application to a complex rabies model. The method is evaluated both in terms of accuracy and computational efficiency on synthetic data and the rabies model. We address the issue of experimental design and provide empirical evidence on the effectiveness of utilizing replicate model evaluations compared to a space-filling design. We employ the Mahalanobis error measure to validate the heteroscedastic Gaussian process based emulator predictions for both the mean and (co)variance. The emulator allows efficient screening to identify important model inputs and better understanding of the complex behaviour of the rabies model.
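
The Mahalanobis error measure used for validation has a simple closed form; a generic sketch for a batch of held-out model runs (variable names are illustrative):

```python
import numpy as np

def mahalanobis_error(y_held_out, mean_pred, cov_pred):
    """D^2 = (y - m)^T C^{-1} (y - m) for the emulator's joint mean and
    covariance predictions. For a well-calibrated emulator the expected
    value of D^2 equals the number of held-out points, so much larger
    values flag over-confident predictions."""
    r = y_held_out - mean_pred
    return float(r @ np.linalg.solve(cov_pred, r))
```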

Relevance: 30.00%

Abstract:

Previous contrast discrimination experiments have shown that luminance contrast is summed across ocular (T. S. Meese, M. A. Georgeson, & D. H. Baker, 2006) and spatial (T. S. Meese & R. J. Summers, 2007) dimensions at threshold and above. However, is this process sufficiently general to operate across the conjunction of eyes and space? Here we used a "Swiss cheese" stimulus where the blurred "holes" in sine-wave carriers were of equal area to the blurred target ("cheese") regions. The locations of the target regions in the monocular image pairs were interdigitated across eyes such that their binocular sum was a uniform grating. When pedestal contrasts were above threshold, the monocular neural images contained strong evidence that the high-contrast regions in the two eyes did not overlap. Nevertheless, sensitivity to dual contrast increments (i.e., to contrast increments in different locations in the two eyes) was a factor of ∼1.7 greater than to single increments (i.e., increments in a single eye), comparable with conventional binocular summation. This provides evidence for a contiguous area summation process that operates at all contrasts and is influenced little, if at all, by eye of origin. A three-stage model of contrast gain control fitted the results and possessed the properties of ocularity invariance and area invariance owing to its cascade of normalization stages. The implications for a population code for pattern size are discussed.
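
A cascade of normalization stages of this kind is easy to sketch. The fragment below is an illustrative two-stage reduction of such a model, with placeholder parameter values rather than the paper's fitted ones:

```python
import numpy as np

def binocular_response(c_left, c_right, m=1.3, p=2.0, s=1.0, z=0.1):
    """Illustrative cascade in the spirit of the fitted gain-control
    model (all parameter values are placeholders). Stage 1 normalizes
    each eye's signal by the summed contrast of both eyes; stage 2
    sums binocularly and applies a second normalization. Area
    summation would repeat the same cascade across locations."""
    stage1_l = c_left ** m / (s + c_left + c_right)   # interocular suppression
    stage1_r = c_right ** m / (s + c_left + c_right)
    b = stage1_l + stage1_r                            # binocular summation
    return b ** p / (z + b)                            # second normalization stage
```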

Relevance: 30.00%

Abstract:

This thesis examines theoretically and experimentally the behaviour of a temporary end plate connection for an aluminium space frame structure subjected to static loading conditions. Theoretical weld failure criteria are derived from first principles for both tensile and shear fillet welds. Direct account of weld penetration is taken by incorporating it into a more exact proposed weld model. Theoretical relationships between weld penetration and weld failure loads, failure planes and failure lengths are derived. The variation in strength between tensile and shear fillet welds is also shown to depend upon the extent of weld penetration achieved. The proposed tensile weld failure theory is extended to predict the theoretical failure of the welds in the end plate space frame connection, and a finite element analysis is conducted to verify the assumptions made for this theory. Experimental hardness and tensile tests are conducted to substantiate the extent and severity of the heat affected zone in aluminium alloy 6082-T6. Simple transverse and longitudinal fillet welded specimens of the same alloy are tested to failure. These results, together with those of other authors, are compared to the theoretical predictions made by the proposed weld failure theories and by those made using Kamtekar's and Kato and Morita's failure equations, the -formula and BS 8118. Experimental tests are also conducted on the temporary space frame connection. The maximum stresses and displacements recorded are checked against results obtained from a finite element analysis of the connection, and failure predictions made by the proposed extended weld failure theory are compared against the experimental results.
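
For orientation, the conventional 45° throat-plane treatment, which the thesis refines by accounting explicitly for penetration, already explains why tensile and shear fillet welds differ in strength. A standard sketch (not the thesis's more exact model):

```latex
% Fillet weld of throat a and length l under force F, checked with
% the von Mises criterion on the 45-degree throat plane.
% Longitudinal (shear) weld:
\tau_\parallel = \frac{F}{a\,l}, \qquad
F_{\mathrm{fail}} = \frac{a\,l\,f}{\sqrt{3}}
% Transverse (tensile) weld: the throat carries equal normal and
% shear components, so
\sigma_\perp = \tau_\perp = \frac{F}{\sqrt{2}\,a\,l}, \qquad
\sqrt{\sigma_\perp^2 + 3\tau_\perp^2} = f
\;\Rightarrow\; F_{\mathrm{fail}} = \frac{a\,l\,f}{\sqrt{2}}
% With no penetration the transverse weld is therefore stronger by
% the classical factor sqrt(3)/sqrt(2), roughly 1.22.
```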

Relevance: 30.00%

Abstract:

In Statnote 9, we described a one-way analysis of variance (ANOVA) 'random effects' model in which the objective was to estimate the degree of variation of a particular measurement and to compare different sources of variation in space and time. The illustrative scenario involved the role of computer keyboards in a university communal computer laboratory as a possible source of microbial contamination of the hands. The study estimated the aerobic colony count of ten selected keyboards, with samples taken from two keys per keyboard determined at 9am and 5pm. This type of design is often referred to as a 'nested' or 'hierarchical' design, and the ANOVA estimated the degree of variation: (1) between keyboards, (2) between keys within a keyboard, and (3) between sample times within a key. An alternative to this design is a 'fixed effects' model, in which the objective is not to measure sources of variation per se but to estimate differences between specific groups or treatments, which are regarded as 'fixed' or discrete effects. This Statnote describes two scenarios utilizing this type of analysis: (1) measuring the degree of bacterial contamination on 2p coins collected from three types of business property, viz., a butcher's shop, a sandwich shop, and a newsagent, and (2) the effectiveness of drugs in the treatment of a fungal eye infection.
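
Scenario (1) reduces to a single F-test under the fixed-effects model. A minimal sketch with invented illustrative counts (not the Statnote's data):

```python
from scipy import stats

# Fixed-effects one-way ANOVA: do mean aerobic colony counts differ
# between the three property types? (counts invented for illustration)
butcher   = [120, 98, 134, 110]
sandwich  = [45, 60, 52, 49]
newsagent = [88, 75, 92, 81]
F, p = stats.f_oneway(butcher, sandwich, newsagent)
print(f"F = {F:.2f}, p = {p:.4f}")
```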

Relevance: 30.00%

Abstract:

The visual system pools information from local samples to calculate textural properties. We used a novel stimulus to investigate how signals are combined to improve estimates of global orientation. Stimuli were 29 × 29 element arrays of 4 c/deg log Gabors, spaced 1° apart. A proportion of these elements had a coherent orientation (horizontal/vertical) with the remainder assigned random orientations. The observer's task was to identify the global orientation. The spatial configuration of the signal was modulated by a checkerboard pattern of square checks containing potential signal elements. The other locations contained either randomly oriented elements ("noise check") or were blank ("blank check"). The distribution of signal elements was manipulated by varying the size and location of the checks within a fixed-diameter stimulus. An ideal detector would only pool responses from potential signal elements. Humans did this for medium check sizes and for large check sizes when a signal was presented in the fovea. For small check sizes, however, the pooling occurred indiscriminately over relevant and irrelevant locations. For these check sizes, thresholds for the noise check and blank check conditions were similar, suggesting that the limiting noise is not induced by the response to the noise elements. The results are described by a model that filters the stimulus at the potential target orientations and then combines the signals over space in two stages. The first is a mandatory integration of local signals over a fixed area, limited by internal noise at each location. The second is a task-dependent combination of the outputs from the first stage. © 2014 ARVO.
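
The two-stage combination can be sketched directly. In the fragment below (shapes and names are illustrative), stage 1 is mandatory fixed-area integration limited by internal noise, and stage 2 is a task-dependent pooling over the locations treated as relevant:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def two_stage_pool(responses, local_size, relevant_mask, noise_sd=1.0, seed=0):
    """responses: 2-D map of oriented-filter outputs.
    Stage 1: mandatory integration over a fixed local area, with
    internal noise added at each location first.
    Stage 2: task-dependent pooling restricted to the locations the
    observer treats as relevant (a boolean mask)."""
    rng = np.random.default_rng(seed)
    noisy = responses + rng.normal(0.0, noise_sd, responses.shape)  # internal noise
    stage1 = uniform_filter(noisy, size=local_size)                 # fixed-area integration
    return float(stage1[relevant_mask].sum())                       # selective pooling
```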

Relevance: 30.00%

Abstract:

A new 3D implementation of a hybrid model based on the analogy with two-phase hydrodynamics has been developed for the simulation of liquids at the microscale. The idea of the method is to smoothly combine the atomistic description in the molecular dynamics zone with the Landau-Lifshitz fluctuating hydrodynamics representation in the rest of the system, within the framework of macroscopic conservation laws. The coupling is controlled by a single user-defined "zoom-in" function s, which has the meaning of a partial concentration in the two-phase analogy model. In comparison with our previous works, the implementation has been extended to full 3D simulations for a range of atomistic models in GROMACS, from argon to water, in equilibrium conditions with a constant or a spatially variable function s. Preliminary results of simulating the diffusion of a small peptide in water are also reported.
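
The role of the function s can be illustrated by the concentration-weighted mixing at the heart of the two-phase analogy. A schematic sketch (not the GROMACS implementation):

```python
import numpy as np

def hybrid_field(s, field_md, field_fh):
    """Concentration-weighted mixing of the two descriptions:
    s = 1 gives the pure molecular-dynamics field, s = 0 pure
    fluctuating hydrodynamics, and intermediate values a smooth
    hand-over between the zones. s may be constant or vary in space."""
    s = np.clip(s, 0.0, 1.0)
    return s * field_md + (1.0 - s) * field_fh
```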

Relevance: 30.00%

Abstract:

Volunteered Service Composition (VSC) refers to the process of composing volunteered services and resources. These services are typically published to a pool of voluntary resources. The composition aims at satisfying some objectives (e.g., utilizing storage and eliminating waste, sharing space and optimizing for energy, or reducing computational cost). In cases where a single volunteered service does not satisfy a request, VSC is required. In this paper, we contribute three approaches to composing volunteered services: exhaustive, naïve, and utility-based search. The proposed utility-based approach, for instance, measures the utility that each volunteered service can provide to each request and systematically selects the one with the highest utility. We found that the utility-based approach tends to be more effective and efficient when selecting services, while minimizing resource waste, compared to the other two approaches.
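
The core of the utility-based approach can be sketched as a greedy best-scoring selection; `utility(service, request)` below is an assumed scoring function, not an interface from the paper:

```python
def select_service(request, services, utility):
    """Utility-based selection sketch: score every volunteered service
    against the request and pick the highest-scoring one. Composition
    would repeat this step until the request is fully satisfied or
    the pool is exhausted."""
    scored = [(utility(s, request), s) for s in services]
    best_score, best = max(scored, key=lambda t: t[0])
    return best if best_score > 0 else None  # no single service suffices -> compose
```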

Relevance: 30.00%

Abstract:

We compare spot patterns generated by Turing mechanisms with those generated by replication cascades, in a model one-dimensional reaction-diffusion system. We determine the stability region of spot solutions in parameter space as a function of a natural control parameter (the feed-rate), in which degenerate patterns with different numbers of spots coexist at a fixed feed-rate. While it is possible to generate identical patterns via both mechanisms, we show that replication cascades lead to a wider choice of pattern profiles that can be selected through a tuning of the feed-rate, exploiting hysteresis and directionality effects of the different pattern pathways.
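
Spot replication and Turing spots are classically studied in Gray-Scott-type systems. Assuming such a model (the abstract does not name its equations), one explicit Euler step of the 1-D dynamics, with the feed-rate f as the control parameter, looks like this:

```python
import numpy as np

def gray_scott_step(u, v, du=2e-5, dv=1e-5, f=0.04, k=0.06, dx=0.01, dt=1.0):
    """One explicit Euler step of the 1-D Gray-Scott model (parameter
    values illustrative; periodic boundaries via np.roll). Sweeping
    the feed-rate f traces out the coexistence and hysteresis effects
    discussed in the abstract."""
    lap = lambda a: (np.roll(a, 1) + np.roll(a, -1) - 2 * a) / dx ** 2
    uvv = u * v * v
    u_new = u + dt * (du * lap(u) - uvv + f * (1 - u))
    v_new = v + dt * (dv * lap(v) + uvv - (f + k) * v)
    return u_new, v_new
```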

Relevance: 30.00%

Abstract:

A quantization scheme is suggested for a spatially inhomogeneous 1+1 Bianchi I model. The scheme consists in quantizing the equations of motion and yields operator (so-called quasi-Heisenberg) equations describing the explicit evolution of the system. A particular gauge suitable for quantization is proposed. The Wheeler-DeWitt equation is considered in the vicinity of zero scale factor and is used to construct a space in which the quasi-Heisenberg operators act. Spatial discretization is suggested as a UV regularization procedure for the equations of motion.
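
For background, the homogeneous Bianchi I line element is shown below; in the spatially inhomogeneous 1+1 setting the scale factors are additionally allowed to depend on one spatial coordinate. This is the standard textbook form, not the paper's particular gauge:

```latex
ds^2 = -dt^2 + a_1^2(t,x)\,dx^2 + a_2^2(t,x)\,dy^2 + a_3^2(t,x)\,dz^2
```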

Relevance: 30.00%

Abstract:

Contemporary models of contrast integration across space assume that pooling operates uniformly over the target region. For sparse stimuli, where high-contrast regions are separated by areas containing no signal, this strategy may be sub-optimal because it pools more noise than signal as area increases. Little is known about the behaviour of human observers when detecting such stimuli. We performed an experiment in which three observers detected regular textures of various areas and six levels of sparseness. Stimuli were regular grids of horizontal grating micropatches, each 1 cycle wide. We varied the ratio of signals (marks) to gaps (spaces), with mark:space ratios ranging from 1:0 (a dense texture with no spaces) to 1:24. To compensate for the decline in sensitivity with increasing distance from fixation, we adjusted the stimulus contrast as a function of eccentricity based on previous measurements [Baldwin, Meese & Baker, 2012, J Vis, 12(11):23]. We used the resulting area summation functions and psychometric slopes to test several filter-based models of signal combination. A MAX model failed to predict the thresholds, but did a good job on the slopes. Blanket summation of stimulus energy improved the threshold fit, but did not predict an observed slope increase with mark:space ratio. Our best model used a template matched to the sparseness of the stimulus, and pooled the squared contrast signal over space. Templates for regular patterns have also recently been proposed to explain the regular appearance of slightly irregular textures (Morgan et al., 2012, Proc R Soc B, 279, 2754–2760).
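
The competing decision rules are easy to state in code. The sketch below contrasts the MAX rule with the best-fitting template scheme; the binary template and the names are illustrative, not the paper's implementation:

```python
import numpy as np

def max_model(response_map):
    """MAX rule: the decision variable is the single largest
    filter response anywhere in the stimulus."""
    return float(np.max(np.abs(response_map)))

def template_energy_model(response_map, template):
    """Best-fitting scheme, sketched: a template matched to the
    stimulus sparseness (here a binary map, 1 over marks and 0 over
    spaces) weights the squared contrast signal, which is then
    pooled over space."""
    return float(np.sum(template * response_map ** 2))
```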

Relevance: 30.00%

Abstract:

In machine learning, the Gaussian process latent variable model (GP-LVM) has been extensively applied in the field of unsupervised dimensionality reduction. When some supervised information, e.g., pairwise constraints or labels of the data, is available, the traditional GP-LVM cannot directly utilize such supervised information to improve the performance of dimensionality reduction. In this case, it is necessary to modify the traditional GP-LVM to make it capable of handling supervised or semi-supervised learning tasks. For this purpose, we propose a new semi-supervised GP-LVM framework under pairwise constraints. By transferring the pairwise constraints in the observed space to the latent space, constrained prior information on the latent variables can be obtained. Under this constrained prior, the latent variables are optimized by the maximum a posteriori (MAP) algorithm. The effectiveness of the proposed algorithm is demonstrated with experiments on a variety of data sets. © 2010 Elsevier B.V.
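
The MAP construction can be sketched as the standard GP-LVM likelihood plus a constraint-derived prior; the penalty form and `lam` below are illustrative, not the paper's exact prior:

```python
import numpy as np

def constrained_objective(X, neg_log_lik, must_link, cannot_link, lam=1.0):
    """MAP objective sketch for a GP-LVM under pairwise constraints:
    the usual negative log likelihood neg_log_lik(X) plus a term that
    pulls must-link pairs together and pushes cannot-link pairs apart
    in the latent space X (rows are latent points); minimizing this
    yields the MAP latent variables."""
    pull = sum(np.sum((X[i] - X[j]) ** 2) for i, j in must_link)
    push = sum(np.exp(-np.sum((X[i] - X[j]) ** 2)) for i, j in cannot_link)
    return neg_log_lik(X) + lam * (pull + push)
```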