30 results for "Transfusion threshold"

in Aston University Research Archive


Relevance: 20.00%

Abstract:

Error rates of a Boolean perceptron with threshold and either spherical or Ising constraint on the weight vector are calculated, within a one-step replica symmetry breaking (RSB) treatment, for storing patterns drawn from biased input and output distributions. For unbiased output distribution and non-zero stability of the patterns, we find a critical load, α_p, above which two solutions to the saddle-point equations appear: one with higher free energy and zero threshold, and a dominant solution with non-zero threshold. We examine this second-order phase transition and the dependence of α_p on the required pattern stability, κ, for both one-step RSB and replica symmetry (RS) in the spherical case and for one-step RSB in the Ising case.
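
As a concrete illustration of the storage problem studied here, the following Python sketch stores biased random patterns in a perceptron with an adjustable threshold under a spherical weight constraint and measures the fraction of patterns whose stability falls below κ. It is a direct numerical toy, not the replica calculation; the network size, load, biases and κ are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, alpha, kappa = 200, 1.5, 0.5           # network size, load alpha = P/N, required stability
m_in, m_out = 0.3, 0.0                    # input bias; unbiased outputs
P = int(alpha * N)

xi = np.where(rng.random((P, N)) < (1 + m_in) / 2, 1.0, -1.0)    # biased +/-1 inputs
sigma = np.where(rng.random(P) < (1 + m_out) / 2, 1.0, -1.0)     # +/-1 outputs

w = rng.standard_normal(N)
w *= np.sqrt(N) / np.linalg.norm(w)       # spherical constraint |w|^2 = N
theta = 0.0
for _ in range(5000):                     # perceptron-style learning with a threshold unit
    stab = sigma * (xi @ w / np.sqrt(N) - theta)
    worst = int(np.argmin(stab))
    if stab[worst] >= kappa:              # all patterns stored with stability >= kappa
        break
    w += sigma[worst] * xi[worst] / np.sqrt(N)
    theta -= sigma[worst] / np.sqrt(N)
    w *= np.sqrt(N) / np.linalg.norm(w)   # re-impose the spherical constraint

stab = sigma * (xi @ w / np.sqrt(N) - theta)
print("fraction of patterns below stability kappa:", float(np.mean(stab < kappa)))
```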

Relevance: 20.00%

Abstract:

Multi-agent algorithms inspired by the division of labour in social insects are applied to a problem of distributed mail retrieval, in which agents must visit mail-producing cities and choose between mail types under certain constraints. The efficiency (i.e. the average amount of mail retrieved per time step) and the flexibility (i.e. the capability of the agents to react to changes in the environment) are investigated in both static and dynamic environments. New rules for mail selection and specialisation are introduced and are shown to exhibit improved efficiency and flexibility compared to existing ones. We employ a genetic algorithm which allows the various rules to evolve and compete. Apart from obtaining optimised parameters for the various rules for any environment, we also observe extinction and speciation. From a more theoretical point of view, in order to avoid finite-size effects, most results are obtained for large population sizes; however, we do analyse the influence of population size on performance. Furthermore, we critically analyse the causes of efficiency loss, derive the exact dynamics of the model in the large-system limit under certain conditions, derive theoretical upper bounds for the efficiency, and compare these with the experimental results.
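
The abstract does not spell out the new selection and specialisation rules, so the sketch below only illustrates the classic response-threshold rule from division-of-labour models that this family of algorithms builds on; the stimulus values, thresholds and exponent are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents, n_mail_types, n_exp = 10, 3, 2

# Each agent has its own threshold per mail type; the stimuli could be the
# amount of mail of each type waiting in the cities.
thresholds = rng.uniform(0.2, 0.8, size=(n_agents, n_mail_types))
stimulus = np.array([0.9, 0.4, 0.1])

def choose_mail_type(agent):
    """Engagement tendency for type j: s_j**n / (s_j**n + theta_aj**n)."""
    p = stimulus**n_exp / (stimulus**n_exp + thresholds[agent] ** n_exp)
    p = p / p.sum()                         # turn tendencies into a selection distribution
    return int(rng.choice(n_mail_types, p=p))

picks = [choose_mail_type(a) for a in range(n_agents)]
print("mail type chosen by each agent:", picks)
# Specialisation is then typically modelled by lowering an agent's threshold for
# the type it just served and raising the others (again, a standard variant).
```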

Relevance: 20.00%

Abstract:

A fundamental problem for any visual system with binocular overlap is the combination of information from the two eyes. Electrophysiology shows that binocular integration of luminance contrast occurs early in visual cortex, but a specific systems architecture has not been established for human vision. Here, we address this by performing binocular summation and monocular, binocular, and dichoptic masking experiments for horizontal 1 cycle per degree test and masking gratings. These data reject three previously published proposals, each of which predicts too little binocular summation and insufficient dichoptic facilitation. However, a simple development of one of the rejected models (the twin summation model) and a completely new model (the two-stage model) provide very good fits to the data. Two features common to both models are gently accelerating (almost linear) contrast transduction prior to binocular summation and suppressive ocular interactions that contribute to contrast gain control. With all model parameters fixed, both models correctly predict (1) systematic variation in psychometric slopes, (2) dichoptic contrast matching, and (3) high levels of binocular summation for various levels of binocular pedestal contrast. A review of evidence from elsewhere leads us to favor the two-stage model. © 2006 ARVO.
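
A schematic sketch of the kind of two-stage architecture described: gently accelerating monocular transduction with interocular suppression feeding a contrast gain pool, binocular summation, and a second output nonlinearity. The particular equations and parameter values below are illustrative assumptions, not the published fits.

```python
import numpy as np

m, p, q, S, Z = 1.3, 8.0, 6.5, 1.0, 0.1    # illustrative parameters (contrast in %)

def stage1(c_this_eye, c_other_eye):
    # Almost-linear excitation divided by a gain pool driven by both eyes
    # (the suppressive ocular interaction).
    return c_this_eye**m / (S + c_this_eye + c_other_eye)

def binocular_response(c_left, c_right):
    b = stage1(c_left, c_right) + stage1(c_right, c_left)   # binocular summation
    return b**p / (Z + b**q)                                # second-stage nonlinearity

# Binocular versus monocular presentation of the same 10% contrast grating:
print(binocular_response(10.0, 10.0), binocular_response(10.0, 0.0))
```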

Relevance: 20.00%

Abstract:

This article takes a broader theoretical perspective on the retail life cycle by incorporating threshold periods at important inflection points in the international growth process. Specifically, it considers one threshold interval between an early phase of disjointed international expansion and a more focused, accelerated international growth programme. It concludes that executives need to consider a set of threshold periods during the development and growth of international store operations, understand why these events occur, and decide how to respond to them in order to cross the threshold. Salient lessons for other international managers are extracted from Wal-Mart's experiences during the threshold period. © 2005 Elsevier Ltd. All rights reserved.

Relevance: 20.00%

Abstract:

How does the brain combine spatio-temporal signals from the two eyes? We quantified binocular summation as the improvement in 2AFC contrast sensitivity for flickering gratings seen by two eyes compared with one. Binocular gratings in phase showed sensitivity up to 1.8 times higher, suggesting nearly linear summation of contrasts. The binocular advantage decreased to 1.4 at lower spatial and higher temporal frequencies (0.25 cycles deg^-1, 30 Hz). Dichoptic, antiphase gratings showed only a small binocular advantage, by a factor of 1.1 to 1.2, but no evidence of cancellation. We present a signal-processing model to account for the contrast-sensitivity functions and the pattern of binocular summation. It has linear sustained and transient temporal filters, nonlinear transduction, and half-wave rectification that creates ON and OFF channels. Binocular summation occurs separately within ON and OFF channels, thus explaining the phase-specific binocular advantage. The model also accounts for earlier findings on detection of brief antiphase flashes and the surprising finding that dichoptic antiphase flicker is seen as frequency-doubled (Cavonius et al., 1992, Ophthalmic and Physiological Optics, 12, 153-156). [Supported by EPSRC project GR/S74515/01]
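
A bare sketch of the phase-specific summation idea: each eye's signal is half-wave rectified into ON and OFF channels, and summation across eyes happens within a channel, so in-phase signals add while antiphase signals end up in opposite channels and neither add nor cancel. The temporal filters, nonlinear transduction and noise of the full model are omitted, which is why this toy gives ratios of about 2.0 and 1.0 rather than the measured 1.8 and 1.1 to 1.2.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
flicker = np.sin(2 * np.pi * 4 * t)              # 4 Hz flicker, unit contrast

def on_off(signal):
    # Half-wave rectification into ON and OFF channels.
    return np.maximum(signal, 0.0), np.maximum(-signal, 0.0)

def binocular_drive(left, right):
    on = on_off(left)[0] + on_off(right)[0]      # summation within the ON channel
    off = on_off(left)[1] + on_off(right)[1]     # summation within the OFF channel
    return max(on.max(), off.max())              # peak drive across the two channels

mono = binocular_drive(flicker, 0.0 * flicker)
in_phase = binocular_drive(flicker, flicker)
antiphase = binocular_drive(flicker, -flicker)
print(in_phase / mono, antiphase / mono)         # ~2.0 and ~1.0 in this bare sketch
```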

Relevance: 20.00%

Abstract:

A multi-scale model of edge coding based on normalized Gaussian derivative filters successfully predicts perceived scale (blur) for a wide variety of edge profiles [Georgeson, M. A., May, K. A., Freeman, T. C. A., & Hesse, G. S. (in press). From filters to features: Scale-space analysis of edge and blur coding in human vision. Journal of Vision]. Our model spatially differentiates the luminance profile, half-wave rectifies the 1st derivative, and then differentiates twice more, to give the 3rd derivative of all regions with a positive gradient. This process is implemented by a set of Gaussian derivative filters with a range of scales. Peaks in the inverted normalized 3rd derivative across space and scale indicate the positions and scales of the edges. The edge contrast can be estimated from the height of the peak. The model provides a veridical estimate of the scale and contrast of edges that have a Gaussian integral profile. Therefore, since scale and contrast are independent stimulus parameters, the model predicts that the perceived value of either of these parameters should be unaffected by changes in the other. This prediction was found to be incorrect: reducing the contrast of an edge made it look sharper, and increasing its scale led to a decrease in the perceived contrast. Our model can account for these effects when the simple half-wave rectifier after the 1st derivative is replaced by a smoothed threshold function described by two parameters. For each subject, one pair of parameters provided a satisfactory fit to the data from all the experiments presented here and in the accompanying paper [May, K. A. & Georgeson, M. A. (2007). Added luminance ramp alters perceived edge blur and contrast: A critical test for derivative-based models of edge coding. Vision Research, 47, 1721-1731]. Thus, when we allow for the visual system's insensitivity to very shallow luminance gradients, our multi-scale model can be extended to edge coding over a wide range of contrasts and blurs. © 2007 Elsevier Ltd. All rights reserved.
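
The scheme lends itself to a short sketch using scipy's Gaussian derivative filters: differentiate the luminance profile, pass the gradient through a smoothed threshold (replacing plain half-wave rectification), differentiate twice more, and take peaks of the inverted, scale-normalised 3rd derivative across space and scale. The softplus form of the threshold, its two parameters and the normalisation exponent are illustrative assumptions, not the fitted model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.special import erf

def edge_analysis(lum, scales, t=0.002, k=0.001):
    best = None
    for s in scales:
        g1 = gaussian_filter1d(lum, s, order=1)            # 1st derivative at scale s
        g1 = k * np.logaddexp(0.0, (g1 - t) / k)           # smoothed threshold (softplus form)
        n3 = -gaussian_filter1d(g1, s, order=2) * s**1.5   # inverted, scale-normalised 3rd derivative
        i = int(np.argmax(n3))
        if best is None or n3[i] > best[0]:
            best = (n3[i], i, s)                           # (peak height ~ contrast, position, scale)
    return best

# A Gaussian-integral edge: blur ~8 pixels, contrast 0.4, centred at x = 128.
x = np.arange(256, dtype=float)
lum = 0.5 + 0.2 * erf((x - 128.0) / (8.0 * np.sqrt(2.0)))
print(edge_analysis(lum, scales=range(2, 21)))
```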

Relevance: 20.00%

Abstract:

Contrast sensitivity is better with two eyes than one. The standard view is that thresholds are about 1.4 (√2) times better with two eyes, and that this arises from monocular responses that, near threshold, are proportional to the square of contrast, followed by binocular summation of the two monocular signals. However, estimates of the threshold ratio in the literature vary from about 1.2 to 1.9, and many early studies had methodological weaknesses. We collected extensive new data, and applied a general model of binocular summation to interpret the threshold ratio. We used horizontal gratings (0.25 - 4 cycles deg^-1) flickering sinusoidally (1 - 16 Hz), presented to one or both eyes through frame-alternating ferroelectric goggles with negligible cross-talk, and used a 2AFC staircase method to estimate contrast thresholds and psychometric slopes. Four naive observers completed 20 000 trials each, and their mean threshold ratios were 1.63, 1.69, 1.71, 1.81 - grand mean 1.71 - well above the classical √2. Mean ratios tended to be slightly lower (~1.60) at low spatial or high temporal frequencies. We modelled contrast detection very simply by assuming a single binocular mechanism whose response is proportional to (L^m + R^m)^p, followed by fixed additive noise, where L, R are contrasts in the left and right eyes, and m, p are constants. Contrast-gain-control effects were assumed to be negligible near threshold. On this model the threshold ratio is 2^(1/m), implying that m = 1.3 on average, while the Weibull psychometric slope (median 3.28) equals 1.247mp, yielding p = 2.0. Together, the model and data suggest that, at low contrasts across a wide spatiotemporal frequency range, monocular pathways are nearly linear in their contrast response (m close to 1), while a strongly accelerating nonlinearity (p = 2, a 'soft threshold') occurs after binocular summation. [Supported by EPSRC project grant GR/S74515/01]
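
The two constants quoted above follow directly from the reported numbers, as this back-of-envelope check shows: with threshold ratio R = 2^(1/m) and Weibull slope β = 1.247mp, the grand mean ratio of 1.71 and the median slope of 3.28 give m ≈ 1.3 and p ≈ 2.0.

```python
import math

R, beta = 1.71, 3.28
m = math.log(2) / math.log(R)     # from R = 2**(1/m)
p = beta / (1.247 * m)            # from beta = 1.247*m*p
print(round(m, 2), round(p, 2))   # ~1.29 and ~2.04
```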

Relevance: 20.00%

Abstract:

The initial image-processing stages of visual cortex are well suited to a local (patchwise) analysis of the viewed scene. But the world's structures extend over space as textures and surfaces, suggesting the need for spatial integration. Most models of contrast vision fall shy of this process because (i) the weak area summation at detection threshold is attributed to probability summation (PS) and (ii) there is little or no advantage of area well above threshold. Both of these views are challenged here. First, it is shown that results at threshold are consistent with linear summation of contrast following retinal inhomogeneity, spatial filtering, nonlinear contrast transduction and multiple sources of additive Gaussian noise. We suggest that the suprathreshold loss of the area advantage in previous studies is due to a concomitant increase in suppression from the pedestal. To overcome this confound, a novel stimulus class is designed where: (i) the observer operates on a constant retinal area, (ii) the target area is controlled within this summation field, and (iii) the pedestal is fixed in size. Using this arrangement, substantial summation is found along the entire masking function, including the region of facilitation. Our analysis shows that PS and uncertainty cannot account for the results, and that suprathreshold summation of contrast extends over at least seven target cycles of grating. © 2007 The Royal Society.
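
For orientation, the two textbook predictions being contrasted can be written down in a couple of lines: probability summation under a Quick/Weibull model predicts that threshold falls with target area roughly as A^(-1/β), whereas linear summation of contrast with an independent noise source per patch predicts the steeper square-root law. The exponents below are these idealisations, not the paper's fitted model.

```python
import numpy as np

A = np.array([1.0, 2.0, 4.0, 8.0])   # relative target area (e.g. number of grating cycles)
beta = 3.5                           # a typical Weibull psychometric slope
ps = A ** (-1.0 / beta)              # probability-summation prediction
linear = A ** (-0.5)                 # linear summation with independent noise per patch
print(np.round(ps, 2), np.round(linear, 2))
```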

Relevance: 20.00%

Abstract:

This paper studies the relationship between leverage and growth, focusing on a large sample of firms in the emerging economies of central and eastern Europe (CEE). Contrary to conventional wisdom, we find that deviation from optimal leverage, and especially excess leverage, is common among firms in many CEE countries. Using firm-level panel data from a group of transition countries, the paper provides support for the hypothesis that leverage positively affects productivity growth, but only below an endogenously determined threshold level.
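
One standard way to locate such an endogenous threshold is a Hansen-type grid search, sketched below on simulated data: for each candidate threshold the intercept and slope of the leverage-growth relation are allowed to differ across the two regimes, and the candidate minimising the residual sum of squares is kept. The data-generating process, trimming and specification are illustrative only and do not reproduce the paper's estimation.

```python
import numpy as np

rng = np.random.default_rng(2)
n, true_thr = 2000, 0.6
leverage = rng.uniform(0.0, 1.0, n)
growth = (0.05 + 0.10 * np.minimum(leverage, true_thr)
          - 0.08 * np.maximum(leverage - true_thr, 0.0)
          + rng.normal(0.0, 0.02, n))                      # helps growth below, hurts it above

def ssr_at(thr):
    low = (leverage <= thr).astype(float)
    X = np.column_stack([low, low * leverage, 1 - low, (1 - low) * leverage])
    beta, *_ = np.linalg.lstsq(X, growth, rcond=None)
    return float(np.sum((growth - X @ beta) ** 2))

grid = np.quantile(leverage, np.linspace(0.15, 0.85, 71))  # trimmed candidate thresholds
thr_hat = grid[int(np.argmin([ssr_at(t) for t in grid]))]
print("estimated leverage threshold:", round(float(thr_hat), 3))
```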

Relevance: 20.00%

Abstract:

Electrical compound action potentials (ECAPs) of the cochlear nerve are used clinically for quick and efficient cochlear implant parameter setting. The ECAP is the aggregate response of nerve fibres at various distances from the recording electrode, and the magnitude of the ECAP is therefore related to the number of fibres excited by a particular stimulus. Current methods, such as the masker-probe or alternating-polarity methods, use the ECAP magnitude at various stimulus levels to estimate the neural threshold, from which the parameters are calculated. However, the correlation between ECAP threshold and perceptual threshold is not always good, with the ECAP threshold typically being much higher than the perceptual threshold. The poor correlation is partly due to the very different pulse rates used for ECAPs (below 100 Hz) and clinical programs (hundreds of Hz up to several kHz). Here we introduce a new method of estimating ECAP threshold for cochlear implants based upon the variability of the response. At neural threshold, where some but not all fibres respond, the response differs from trial to trial. This inter-trial variability can be detected on top of the constant variability of the system noise. The large stimulus artefact, which requires additional trials for artefact rejection in the standard ECAP magnitude methods, is not consequential, as it has little variability. The variability method therefore consists of simply presenting a pulse and recording the ECAP, and as such is quicker than other methods. It also has the potential to be run at high rates like clinical programs, potentially improving the correlation with behavioural threshold. Preliminary data are presented showing a detectable variability increase shortly after probe offset, at probe levels much lower than those producing a detectable ECAP magnitude. Care must be taken, however, to avoid saturation of the recording amplifier; in our experiments we found a gain of 300 to be optimal.
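
A small simulation illustrates the variability idea: present the same probe many times, estimate the across-trial standard deviation of the post-probe recording, and compare it with the system-noise floor measured with no neural response. The waveforms, noise level and stochastic firing model below are invented purely for illustration; note that the large but repeatable artefact adds nothing to the across-trial variability.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_samples = 100, 120
t = np.arange(n_samples, dtype=float)

def record_trials(prob_fire):
    """Each trial: repeatable artefact + (stochastic) neural response + system noise."""
    artefact = 20.0 * np.exp(-t / 5.0)                        # large but identical every trial
    neural = -np.exp(-((t - 25.0) ** 2) / 30.0)               # N1-like deflection
    amp = (rng.random(n_trials) < prob_fire) * rng.uniform(0.5, 1.5, n_trials)
    return artefact + np.outer(amp, neural) + rng.normal(0.0, 0.05, (n_trials, n_samples))

noise_floor = record_trials(0.0).std(axis=0).mean()           # no fibres respond
near_threshold = record_trials(0.5).std(axis=0).max()         # some trials respond, some do not
print(round(float(noise_floor), 3), round(float(near_threshold), 3))
```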

Relevance: 20.00%

Abstract:

We report on a theoretical study of an interferometric system in which half of a collimated beam from a broadband optical source is intercepted by a glass slide, the whole beam subsequently being incident on a diffraction grating and the resulting spectrum being viewed with a linear CCD array. Using Fourier theory, we derive an expression for the intensity distribution across the CCD array. This expression is then examined for non-cavity and cavity sources, for different cases determined by the direction from which the slide is inserted into the beam and by the source bandwidth. The theoretical model shows that the narrower the source linewidth, the more the visibility of the Talbot bands, as a function of path imbalance, deviates from the previously known triangular shape. When the source is a laser diode below threshold, the structure of the CCD signal spectrum is very complex. The number of components present simultaneously increases with the number of grating lines and decreases with the laser cavity length. The model also predicts the appearance of bands in situations not usually associated with Talbot bands.

Relevance: 20.00%

Abstract:

We report on the problems encountered when replacing a tungsten filament lamp with a laser diode in a set-up for displaying Talbot bands using a diffraction grating. It is shown that the band pattern is rather complex and that strong interference signals may exist in situations where Talbot bands are not normally expected to appear. In these situations, the period of the bands increases with the optical path difference (OPD). The dependence of band visibility on path imbalance is obtained by inserting opaque screens halfway into the beams in the arms of a Michelson interferometer.