22 results for Threshold model

in Aston University Research Archive


Relevance:

100.00%

Publisher:

Abstract:

Multi-agent algorithms inspired by the division of labour in social insects are applied to a problem of distributed mail retrieval in which agents must visit mail-producing cities and choose between mail types under certain constraints. The efficiency (i.e. the average amount of mail retrieved per time step) and the flexibility (i.e. the capability of the agents to react to changes in the environment) are investigated in both static and dynamic environments. New rules for mail selection and specialisation are introduced and are shown to exhibit improved efficiency and flexibility compared to existing ones. We employ a genetic algorithm which allows the various rules to evolve and compete. Apart from obtaining optimised parameters for the various rules for any environment, we also observe extinction and speciation. From a more theoretical point of view, in order to avoid finite size effects, most results are obtained for large population sizes. However, we do analyse the influence of population size on the performance. Furthermore, we critically analyse the causes of efficiency loss, derive the exact dynamics of the model in the large system limit under certain conditions, derive theoretical upper bounds for the efficiency, and compare these with the experimental results.
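The abstract does not reproduce the specific selection rules studied, but division-of-labour algorithms of this kind typically build on a response-threshold rule. A minimal Python sketch of that classic rule, with hypothetical mail types and parameter values:

```python
import random

def response_probability(stimulus, threshold, n=2):
    """Classic response-threshold rule: P = s^n / (s^n + theta^n).
    An agent with a low threshold for a mail type responds even to
    weak stimuli for that type, producing specialisation."""
    return stimulus**n / (stimulus**n + threshold**n)

def choose_mail_type(stimuli, thresholds, rng=random):
    """Stochastically pick a mail type in proportion to its response
    probability. `stimuli` and `thresholds` are dicts keyed by mail
    type; the keys and values here are purely illustrative."""
    weights = {t: response_probability(stimuli[t], thresholds[t])
               for t in stimuli}
    r = rng.random() * sum(weights.values())
    for t, w in weights.items():
        r -= w
        if r <= 0:
            return t
    return t  # fall-through guard for floating-point rounding
```

An agent whose threshold for a mail type has drifted low will almost always pick that type, which is the basic mechanism behind emergent specialisation in such models.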

Relevance:

100.00%

Publisher:

Abstract:

A nature-inspired decentralised multi-agent algorithm is proposed to solve a problem of distributed task allocation in which cities produce and store batches of different mail types. Agents must collect and process the mail batches, without global knowledge of their environment or communication between agents. The problem is constrained so that agents are penalised for switching mail types. When an agent processes a mail batch of a different type from the previous one, it must undergo a change-over, with repeated change-overs rendering the agent inactive. The efficiency (average amount of mail retrieved) and the flexibility (ability of the agents to react to changes in the environment) are investigated in both static and dynamic environments and with respect to sudden changes. New rules for mail selection and specialisation are proposed and are shown to exhibit improved efficiency and flexibility compared to existing ones. We employ an evolutionary algorithm which allows the various rules to evolve and compete. Apart from obtaining optimised parameters for the various rules for any environment, we also observe extinction and speciation.

Relevance:

100.00%

Publisher:

Abstract:

A nature-inspired decentralised multi-agent algorithm is proposed to solve a problem of distributed task selection in which cities produce and store batches of different mail types. Agents must collect and process the mail batches, without a priori knowledge of the available mail at the cities or inter-agent communication. To process a different mail type from the previous one, an agent must undergo a change-over, during which it remains inactive. We propose a threshold-based algorithm in order to maximise the overall efficiency (the average amount of mail collected). We show that memory, i.e. the possibility for agents to develop preferences for certain cities, not only leads to emergent cooperation between agents, but also to a significant increase in efficiency (above the theoretical upper limit for any memoryless algorithm), and we systematically investigate the influence of the various model parameters. Finally, we demonstrate the flexibility of the algorithm to changes in circumstances, and its excellent scalability.
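As a rough illustration of how per-city memory could work, here is a sketch of an exponential-average preference update paired with a softmax choice over remembered preferences. The update rule, the learning rate eta, and the inverse temperature beta are assumptions for illustration, not the paper's actual mechanism:

```python
import math
import random

def update_preference(pref, reward, eta=0.1):
    """Exponential-average memory (hypothetical form): the agent's
    preference for a city drifts toward the payoff last obtained there."""
    return (1 - eta) * pref + eta * reward

def choose_city(prefs, beta=2.0, rng=random):
    """Softmax choice over remembered preferences: stronger memories make
    revisits more likely, which is how city specialisation can emerge."""
    weights = [math.exp(beta * p) for p in prefs]
    r = rng.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(prefs) - 1  # floating-point fall-through guard
```

Agents that happen to be rewarded at a city revisit it more often, so different agents lock onto different cities, one plausible route to the emergent cooperation the abstract describes.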

Relevance:

60.00%

Publisher:

Abstract:

Visual perception depends not only on low-level sensory input but also on high-level cognitive factors such as attention. In this paper, we sought to determine whether attentional processes can be internally monitored for the purpose of enhancing behavioural performance. To do so, we developed a novel paradigm involving an orientation discrimination task in which observers had the freedom to delay target presentation, by any amount required, until they judged their attentional focus to be complete. Our results show that discrimination performance is significantly improved when individuals self-monitor their level of visual attention and respond only when they perceive it to be maximal. Although target delay times varied widely from trial to trial (range 860 ms to 12.84 s), we show that their distribution is Gaussian when plotted on a reciprocal latency scale. We further show that the neural basis of the delay times for judging attentional status is well explained by a linear rise-to-threshold model. We conclude that attentional mechanisms can be self-monitored for the purpose of enhancing human decision-making processes, and that the neural basis of such processes can be understood in terms of a simple, yet broadly applicable, linear rise-to-threshold model.
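The link between a linear rise-to-threshold and reciprocal-Gaussian latencies can be sketched in a few lines: if a decision signal climbs at a normally distributed rate and fires at a fixed level, then latency is threshold divided by rate, so the reciprocal of latency is Gaussian, matching the reported distribution. The parameter values below are illustrative, not fitted to the paper's data:

```python
import random
import statistics

def rise_to_threshold_latency(mu=4.0, sigma=1.0, theta=1.0, rng=random):
    """Linear rise-to-threshold latency: a signal rises at rate
    r ~ Normal(mu, sigma) and triggers at level theta, so T = theta / r
    and 1/T is Gaussian. All parameter values are illustrative."""
    r = rng.gauss(mu, sigma)
    while r <= 0:  # discard the (rare) non-positive rates
        r = rng.gauss(mu, sigma)
    return theta / r

random.seed(0)
latencies = [rise_to_threshold_latency() for _ in range(20000)]
reciprocals = [1.0 / t for t in latencies]
# The reciprocal latencies recover the Gaussian rate distribution:
print(statistics.mean(reciprocals), statistics.stdev(reciprocals))
```

Plotting `reciprocals` as a histogram would give the near-Gaussian shape the abstract describes, even though the raw latencies themselves are heavily skewed.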

Relevance:

60.00%

Publisher:

Abstract:

We study the role of political accountability as a determinant of corruption and economic growth. Our model identifies two governance regimes defined by the quality of political institutions and shows that the relationship between corruption and growth is regime-specific. We use a threshold model to estimate the impact of corruption on growth, treating corruption as an endogenous variable. We find two governance regimes, conditional on the quality of political institutions. In the regime with high-quality political institutions, corruption has a substantial negative impact on growth. In the regime with low-quality institutions, corruption has no impact on growth.

Relevance:

40.00%

Publisher:

Abstract:

A multi-scale model of edge coding based on normalized Gaussian derivative filters successfully predicts perceived scale (blur) for a wide variety of edge profiles [Georgeson, M. A., May, K. A., Freeman, T. C. A., & Hesse, G. S. (in press). From filters to features: Scale-space analysis of edge and blur coding in human vision. Journal of Vision]. Our model spatially differentiates the luminance profile, half-wave rectifies the 1st derivative, and then differentiates twice more, to give the 3rd derivative of all regions with a positive gradient. This process is implemented by a set of Gaussian derivative filters with a range of scales. Peaks in the inverted normalized 3rd derivative across space and scale indicate the positions and scales of the edges. The edge contrast can be estimated from the height of the peak. The model provides a veridical estimate of the scale and contrast of edges that have a Gaussian integral profile. Therefore, since scale and contrast are independent stimulus parameters, the model predicts that the perceived value of either of these parameters should be unaffected by changes in the other. This prediction was found to be incorrect: reducing the contrast of an edge made it look sharper, and increasing its scale led to a decrease in the perceived contrast. Our model can account for these effects when the simple half-wave rectifier after the 1st derivative is replaced by a smoothed threshold function described by two parameters. For each subject, one pair of parameters provided a satisfactory fit to the data from all the experiments presented here and in the accompanying paper [May, K. A. & Georgeson, M. A. (2007). Added luminance ramp alters perceived edge blur and contrast: A critical test for derivative-based models of edge coding. Vision Research, 47, 1721-1731]. Thus, when we allow for the visual system's insensitivity to very shallow luminance gradients, our multi-scale model can be extended to edge coding over a wide range of contrasts and blurs. © 2007 Elsevier Ltd. All rights reserved.
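A toy, single-scale version of the pipeline described above can be sketched as follows; the full model uses a bank of normalized Gaussian derivative filters across many scales and, per the final paragraph, a smoothed threshold in place of the simple half-wave rectifier, both of which this sketch omits:

```python
import math
import numpy as np

# A blurred edge with a Gaussian-integral profile (blur = 8 px, contrast 0.5)
x = np.arange(256, dtype=float)
edge = np.array([0.5 * (1.0 + math.erf((xi - 128.0) / (8.0 * math.sqrt(2.0))))
                 for xi in x])

def gaussian_smooth(signal, sigma):
    """Convolve with a normalised Gaussian kernel of scale sigma."""
    r = int(4 * sigma)
    t = np.arange(-r, r + 1, dtype=float)
    g = np.exp(-t**2 / (2.0 * sigma**2))
    g /= g.sum()
    return np.convolve(signal, g, mode="same")

# Pipeline (illustrative): 1st derivative -> half-wave rectify ->
# differentiate twice more -> peak of inverted 3rd derivative = edge location.
d1 = np.maximum(np.gradient(gaussian_smooth(edge, sigma=4.0)), 0.0)
d3 = np.gradient(np.gradient(d1))
peak = int(np.argmax(-d3))
print(peak)  # lands at (or within a pixel of) the true edge centre, 128
```

For a Gaussian-integral edge the rectified 1st derivative is itself a Gaussian, so the inverted 3rd derivative peaks at the edge centre, which is what makes the localisation veridical for this profile class.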

Relevance:

30.00%

Publisher:

Abstract:

A fundamental problem for any visual system with binocular overlap is the combination of information from the two eyes. Electrophysiology shows that binocular integration of luminance contrast occurs early in visual cortex, but a specific systems architecture has not been established for human vision. Here, we address this by performing binocular summation and monocular, binocular, and dichoptic masking experiments for horizontal 1 cycle per degree test and masking gratings. These data reject three previously published proposals, each of which predict too little binocular summation and insufficient dichoptic facilitation. However, a simple development of one of the rejected models (the twin summation model) and a completely new model (the two-stage model) provide very good fits to the data. Two features common to both models are gently accelerating (almost linear) contrast transduction prior to binocular summation and suppressive ocular interactions that contribute to contrast gain control. With all model parameters fixed, both models correctly predict (1) systematic variation in psychometric slopes, (2) dichoptic contrast matching, and (3) high levels of binocular summation for various levels of binocular pedestal contrast. A review of evidence from elsewhere leads us to favor the two-stage model. © 2006 ARVO.

Relevance:

30.00%

Publisher:

Abstract:

How does the brain combine spatio-temporal signals from the two eyes? We quantified binocular summation as the improvement in 2AFC contrast sensitivity for flickering gratings seen by two eyes compared with one. Binocular gratings in phase showed sensitivity up to 1.8 times higher, suggesting nearly linear summation of contrasts. The binocular advantage decreased to 1.4 at lower spatial and higher temporal frequencies (0.25 cycle deg^-1, 30 Hz). Dichoptic, antiphase gratings showed only a small binocular advantage, by a factor of 1.1 to 1.2, but no evidence of cancellation. We present a signal-processing model to account for the contrast-sensitivity functions and the pattern of binocular summation. It has linear sustained and transient temporal filters, nonlinear transduction, and half-wave rectification that creates ON and OFF channels. Binocular summation occurs separately within ON and OFF channels, thus explaining the phase-specific binocular advantage. The model also accounts for earlier findings on detection of brief antiphase flashes and the surprising finding that dichoptic antiphase flicker is seen as frequency-doubled (Cavonius et al., 1992, Ophthalmic and Physiological Optics, 12, 153-156). [Supported by EPSRC project GR/S74515/01]

Relevance:

30.00%

Publisher:

Abstract:

Contrast sensitivity is better with two eyes than one. The standard view is that thresholds are about 1.4 (√2) times better with two eyes, and that this arises from monocular responses that, near threshold, are proportional to the square of contrast, followed by binocular summation of the two monocular signals. However, estimates of the threshold ratio in the literature vary from about 1.2 to 1.9, and many early studies had methodological weaknesses. We collected extensive new data, and applied a general model of binocular summation to interpret the threshold ratio. We used horizontal gratings (0.25 - 4 cycles deg^-1) flickering sinusoidally (1 - 16 Hz), presented to one or both eyes through frame-alternating ferroelectric goggles with negligible cross-talk, and used a 2AFC staircase method to estimate contrast thresholds and psychometric slopes. Four naive observers completed 20 000 trials each, and their mean threshold ratios were 1.63, 1.69, 1.71, 1.81 - grand mean 1.71 - well above the classical √2. Mean ratios tended to be slightly lower (~1.60) at low spatial or high temporal frequencies. We modelled contrast detection very simply by assuming a single binocular mechanism whose response is proportional to (L^m + R^m)^p, followed by fixed additive noise, where L, R are contrasts in the left and right eyes, and m, p are constants. Contrast-gain-control effects were assumed to be negligible near threshold. On this model the threshold ratio is 2^(1/m), implying that m = 1.3 on average, while the Weibull psychometric slope (median 3.28) equals 1.247mp, yielding p = 2.0. Together, the model and data suggest that, at low contrasts across a wide spatiotemporal frequency range, monocular pathways are nearly linear in their contrast response (m close to 1), while a strongly accelerating nonlinearity (p = 2, a 'soft threshold') occurs after binocular summation. [Supported by EPSRC project grant GR/S74515/01]
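The two quoted relations, threshold ratio 2^(1/m) and Weibull slope 1.247mp, follow directly from the model: at equal binocular contrast c the response is (2c^m)^p while a single eye gives (c^m)^p, so equating the two criteria yields a monocular/binocular threshold ratio of 2^(1/m), independent of p. They can be checked numerically with the abstract's fitted values m = 1.3 and p = 2.0:

```python
def threshold_ratio(m):
    # Binocular (2 c_b^m)^p = monocular (c_m^m)^p  =>  c_m / c_b = 2^(1/m)
    return 2.0 ** (1.0 / m)

def weibull_slope(m, p):
    # Slope relation quoted in the abstract: beta = 1.247 * m * p
    return 1.247 * m * p

print(f"{threshold_ratio(1.3):.2f}")     # 1.70, near the grand mean 1.71
print(f"{weibull_slope(1.3, 2.0):.2f}")  # 3.24, near the median slope 3.28
```

Note that the classical view (m = 2) gives threshold_ratio(2.0) = √2 ≈ 1.41, which is exactly the standard prediction the data reject.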

Relevance:

30.00%

Publisher:

Abstract:

We propose a simple model that captures the salient properties of distribution networks, and study the possible occurrence of blackouts, i.e., sudden failures of large portions of such networks. The model is defined on a random graph of finite connectivity. The nodes of the graph represent hubs of the network, while the edges of the graph represent the links of the distribution network. Both the nodes and the edges carry dynamical two-state variables representing the functioning or dysfunctional state of the node or link in question. We describe a dynamical process in which the breakdown of a link or node is triggered when the level of maintenance it receives falls below a given threshold. This form of dynamics can lead to situations of catastrophic breakdown, if levels of maintenance are themselves dependent on the functioning of the net, once maintenance levels locally fall below a critical threshold due to fluctuations. We formulate conditions under which such systems can be analyzed in terms of thermodynamic equilibrium techniques, and under these conditions derive a phase diagram characterizing the collective behavior of the system, given its model parameters. The phase diagram is confirmed qualitatively and quantitatively by simulations on explicit realizations of the graph, thus confirming the validity of our approach. © 2007 The American Physical Society.
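As a loose illustration of the failure mechanism (not the paper's equilibrium analysis, and simplified to node failures only, whereas the paper also tracks edge states), a maintenance-threshold cascade on a random graph might look like:

```python
import random

def random_graph(n, mean_degree, seed=1):
    """Erdős–Rényi-style random graph of finite connectivity,
    a toy stand-in for the paper's graph ensemble."""
    rng = random.Random(seed)
    p = mean_degree / (n - 1)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def cascade(adj, threshold, initial_failures):
    """Fail every node whose fraction of functioning neighbours (a crude
    proxy for 'maintenance level') drops below `threshold`; iterate to a
    fixed point. Returns the number of nodes still functioning."""
    up = {i: True for i in adj}
    for i in initial_failures:
        up[i] = False
    changed = True
    while changed:
        changed = False
        for i, nbrs in adj.items():
            if up[i] and nbrs:
                maintenance = sum(up[j] for j in nbrs) / len(nbrs)
                if maintenance < threshold:
                    up[i] = False
                    changed = True
    return sum(up.values())
```

A small seed failure can leave neighbours under-maintained, whose failure under-maintains their neighbours in turn, which is the catastrophic-breakdown feedback the abstract describes.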

Relevance:

30.00%

Publisher:

Abstract:

We report on a theoretical study of an interferometric system in which half of a collimated beam from a broadband optical source is intercepted by a glass slide, the whole beam subsequently being incident on a diffraction grating and the resulting spectrum being viewed using a linear CCD array. Using Fourier theory, we derive the expression of the intensity distribution across the CCD array. This expression is then examined for non-cavity and cavity sources for different cases determined by the direction from which the slide is inserted into the beam and the source bandwidth. The theoretical model shows that the narrower the source linewidth, the higher the deviation of the Talbot bands' visibility (as it is dependent on the path imbalance) from the previously known triangular shape. When the source is a laser diode below threshold, the structure of the CCD signal spectrum is very complex. The number of components present simultaneously increases with the number of grating lines and decreases with the laser cavity length. The model also predicts the appearance of bands in situations not usually associated with Talbot bands.

Relevance:

30.00%

Publisher:

Abstract:

Preeclampsia (PE) is characterized by widespread endothelial damage with hypertension, proteinuria, glomeruloendotheliosis and elevated soluble Flt-1 (sFlt-1), a naturally occurring antagonist of vascular endothelial growth factor (VEGF). Cancer patients receiving anti-VEGF therapy exhibit similar symptoms. We suggested that a decrease in circulating sFlt-1 would alleviate the symptoms associated with PE. Adenoviral (Adv) overexpression of sFlt-1 induced proteinuria, caused glomerular damage and increased blood pressure in female Balb/c mice. Circulating levels of sFlt-1 above 50 ng/ml plasma induced severe vascular damage and glomerular endotheliosis. Albumin concentration in urine was elevated up to 30-fold compared to control AdvGFP-treated animals. The threshold of kidney damage was in the range of 20-30 ng/ml sFlt-1 in plasma (8-15 ng/ml in urine). Co-administration of AdvsFlt-1 with AdvVEGF to neutralize circulating sFlt-1 resulted in more than a 70% reduction in free sFlt-1 in plasma and more than an 80% reduction in urine, and rescued the damaging effect of sFlt-1 on the kidneys. This demonstrates that below a critical threshold sFlt-1 fails to elicit damage to the fenestrated endothelium, and that co-expression of VEGF is able to rescue effects mediated by sFlt-1 overexpression.


Relevance:

30.00%

Publisher:

Abstract:

Particle breakage due to fluid flow through various geometries can have a major influence on the performance of particle/fluid processes and on the product quality characteristics of particle/fluid products. In this study, whey protein precipitate dispersions were used as a case study to investigate the effect of flow intensity and exposure time on the breakage of these precipitate particles. Computational fluid dynamic (CFD) simulations were performed to evaluate the turbulent eddy dissipation rate (TED) and associated exposure time along various flow geometries. The focus of this work is on the predictive modelling of particle breakage in particle/fluid systems. A number of breakage models were developed to relate TED and exposure time to particle breakage. The suitability of these breakage models was evaluated for their ability to predict the experimentally determined breakage of the whey protein precipitate particles. A "power-law threshold" breakage model was found to provide a satisfactory capability for predicting the breakage of the whey protein precipitate particles. The whey protein precipitate dispersions were propelled through a number of different geometries such as bends, tees and elbows, and the model accurately predicted the mean particle size attained after flow through these geometries. © 2005 Elsevier Ltd. All rights reserved.
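The abstract does not give the functional form of the "power-law threshold" breakage model, but a plausible reading, zero breakage below a critical turbulent eddy dissipation rate and power-law growth in the excess TED and exposure time above it, can be sketched as follows; every name and parameter value here is a hypothetical placeholder, not a fitted constant from the study:

```python
def breakage_extent(ted, exposure_time, ted_crit=1.0e3, k=1.0e-4, a=0.8, b=0.3):
    """Illustrative 'power-law threshold' form: no breakage below the
    critical TED; above it, the breakage extent grows as a power law in
    the excess dissipation rate and the exposure time, capped at 1.0.
    ted_crit, k, a and b are hypothetical placeholders."""
    if ted <= ted_crit:
        return 0.0
    return min(1.0, k * (ted - ted_crit) ** a * exposure_time ** b)
```

In a CFD-coupled workflow, `ted` and `exposure_time` would come from the simulated dissipation-rate field along each flow geometry (bend, tee, elbow), and the predicted breakage extent would then be mapped to a reduction in mean particle size.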