890 results for PERCOLATION THRESHOLDS
Abstract:
Quantifying water losses in paddy fields assists estimation of water availability in rainfed lowland rice ecosystems. Little information is available on the water balance at different toposequence positions of sloped rainfed lowlands. The aim of this work was therefore to quantify percolation and lateral water flow with special reference to toposequential variation. Data used for the analysis were collected in Laos and northeast Thailand. Percolation and water tables were measured on a daily basis using a steel cylindrical tube with a lid and perforated PVC tubes, respectively. The percolation rate was determined by linear regression of cumulative percolation against time. Assuming that the total amount of evaporation and transpiration was equivalent to potential evapotranspiration, the lateral water flow was estimated from the water balance equation. Separate perched water and groundwater tables were observed in paddy fields on coarse-textured soils. The percolation rate varied between 0 and 3 mm/day across locations, and the maximum water loss by lateral movement was more than 20 mm/day. Our results agree with previously reported findings, and the methodology for estimating water balance components appears reasonably sound. With regard to toposequential variation, the higher the position in the toposequence, the greater the potential for water loss because of higher percolation and lateral flow rates.
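A minimal numerical sketch of the two estimation steps this abstract describes; the variable names, example numbers, and the simplified daily water balance used to close for lateral flow are illustrative assumptions, not the authors' code:

```python
import numpy as np

def percolation_rate(days, cumulative_percolation_mm):
    """Percolation rate (mm/day) as the slope of a linear fit to
    cumulative percolation, as described in the abstract."""
    slope, _intercept = np.polyfit(days, cumulative_percolation_mm, 1)
    return slope

def lateral_flow(rain_mm, pet_mm, percolation_mm, delta_storage_mm):
    """Close a simplified daily water balance for the lateral-flow term,
    assuming actual ET equals potential evapotranspiration (PET):
        dS = R - PET - Perc - Lateral  =>  Lateral = R - PET - Perc - dS
    Positive values indicate net lateral loss from the field."""
    return rain_mm - pet_mm - percolation_mm - delta_storage_mm

# Example: 10 days of cumulative percolation readings (mm), ~2 mm/day
days = np.arange(10)
cum_perc = 2.1 * days + np.random.normal(0, 0.3, 10)
print(f"percolation rate ~ {percolation_rate(days, cum_perc):.2f} mm/day")
# A day with 12 mm rain, 4.5 mm PET, 2.1 mm percolation and a 15 mm drop
# in ponded-water storage implies a large lateral loss:
print(f"lateral flow = {lateral_flow(12.0, 4.5, 2.1, -15.0):.1f} mm/day")
```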
Abstract:
In this paper we present a detailed numerical investigation of the fault-tolerant threshold for optical cluster-state quantum computation. Our noise model allows both photon loss and depolarizing noise, the latter serving as a general proxy for all types of local noise other than photon loss. We obtain a threshold region of allowed pairs of values for the two types of noise. Roughly speaking, our results show that scalable optical quantum computing is possible in the combined presence of both noise types, provided that the loss probability is less than 3 × 10^-3 and the depolarization probability is less than 10^-4. Our fault-tolerant protocol involves a number of innovations, including a method for syndrome extraction known as telecorrection, whereby repeated syndrome measurements are guaranteed to agree. This paper is an extended version of the Letter by Dawson et al.
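The telecorrection step extracts syndromes via teleportation; as a toy illustration only of the repeated-agreement idea (not the actual protocol), one might model a noisy syndrome readout that is repeated until two consecutive outcomes match, which suppresses readout errors roughly quadratically. Everything below is an illustrative assumption:

```python
import random

def noisy_syndrome(true_syndrome, flip_prob):
    """One noisy readout of a syndrome bit: flips with probability flip_prob."""
    return true_syndrome ^ (random.random() < flip_prob)

def repeated_syndrome(true_syndrome, flip_prob):
    """Repeat the readout until two consecutive outcomes agree, then accept
    that value. A cartoon of 'repeated syndrome measurements guaranteed to
    agree'; the real telecorrection scheme is teleportation-based."""
    prev = noisy_syndrome(true_syndrome, flip_prob)
    while True:
        cur = noisy_syndrome(true_syndrome, flip_prob)
        if cur == prev:
            return cur
        prev = cur

# Compare error rates of a single readout vs. the agreed value at 10% noise
trials = 100_000
single = sum(noisy_syndrome(0, 0.1) for _ in range(trials)) / trials
agreed = sum(repeated_syndrome(0, 0.1) for _ in range(trials)) / trials
print(f"single readout error ~ {single:.3f}, agreed-pair error ~ {agreed:.3f}")
```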
Abstract:
In this Letter we numerically investigate the fault-tolerant threshold for optical cluster-state quantum computing. We allow both photon loss noise and depolarizing noise (as a general proxy for all local noise), and obtain a threshold region of allowed pairs of values for the two types of noise. Roughly speaking, our results show that scalable optical quantum computing is possible for photon loss probabilities < 3 × 10^-3 and depolarization probabilities < 10^-4.
Abstract:
Many developing south-east Asian governments are not capturing the full rent from domestic forest logging operations. Such rent losses are commonly related to institutional failures, where informal institutions tend to dominate the control of forestry activity in spite of weakly enforced regulations. Our model attempts to add a new dimension to thinking about deforestation. We present a simple conceptual model, based on individual decisions rather than social or forest planning, which couples the faster human dynamics of participation in informal activity with the relatively slower ecological dynamics of changes in forest resources. We demonstrate how incumbent informal logging operations can be persistent, and that any spending aimed at replacing the informal institutions can only be successful if it pushes the institutional setting past some threshold.
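A toy simulation of the kind of threshold behaviour described: fast replicator-style participation dynamics coupled to a slower forest stock, with enforcement spending lowering the informal payoff. All functional forms and parameter values here are illustrative assumptions, not the paper's model:

```python
def simulate(p0, f0, spending, steps=5000, dt=0.01):
    """Toy fast-slow system:
      p: share of loggers in the informal sector (fast variable)
      f: normalised forest stock (slow variable)
    Participation grows when the informal payoff (rising in f, falling in
    enforcement spending) beats a fixed formal payoff; the forest regrows
    logistically and is depleted by informal harvest."""
    p, f = p0, f0
    for _ in range(steps):
        informal_payoff = 1.5 * f - spending
        formal_payoff = 0.5
        dp = p * (1 - p) * (informal_payoff - formal_payoff)  # fast
        df = 0.05 * (f * (1 - f) - 0.8 * p * f)               # slow
        p += dt * dp
        f += dt * df
    return p, f

for s in (0.0, 0.6, 1.2):
    p, f = simulate(p0=0.8, f0=0.9, spending=s)
    print(f"spending={s:.1f}: informal share -> {p:.2f}, forest -> {f:.2f}")
```

In this toy run, moderate spending leaves the informal sector entrenched at an interior equilibrium; only spending past a critical level tips the system to the all-formal state, echoing the threshold result stated above.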
Abstract:
The replica method, developed in statistical physics, is employed in conjunction with Gallager's methodology to accurately evaluate zero-error noise thresholds for Gallager code ensembles. Our approach generally provides more optimistic evaluations than those reported in the information-theory literature for sparse matrices; the difference vanishes as the parity-check matrix becomes dense.
Abstract:
Liberalisation has become an increasingly important policy trend, both in the private and public sectors of advanced industrial economies. This article eschews deterministic accounts of liberalisation by considering why government attempts to institute competition may be successful in some cases and not others. It considers the relative strength of explanations focusing on the institutional context, and on the volume and power of sectoral actors supporting liberalisation. These approaches are applied to two attempts to liberalise, one successful and one unsuccessful, within one sector in one nation – higher education in Britain. Each explanation is seen to have some explanatory power, but none is sufficient to explain why competition was generalised in the one case and not the other. The article counsels the need for scholars of liberalisation to be open to multiple explanations which may require the marshalling of multiple sources and types of evidence.
Abstract:
The slope of the two-interval, forced-choice psychometric function (e.g. the Weibull parameter, β) provides valuable information about the relationship between contrast sensitivity and signal strength. However, little is known about how or whether β varies with stimulus parameters such as spatiotemporal frequency and stimulus size and shape. A second unresolved issue concerns the best way to estimate the slope of the psychometric function. For example, if an observer is non-stationary (e.g. their threshold drifts between experimental sessions), β will be underestimated if curve fitting is performed after collapsing the data across sessions. We measured psychometric functions for two experienced observers for 14 different spatiotemporal configurations of pulsed or flickering grating patches and bars on each of 8 days. We found β ≈ 3 to be fairly constant across almost all conditions, consistent with a fixed nonlinear contrast transducer and/or a constant level of intrinsic stimulus uncertainty (e.g. a square-law transducer and a low level of intrinsic uncertainty). Our analysis showed that estimating a single β from results averaged over several experimental sessions was slightly more accurate than averaging multiple estimates from the individual sessions. However, the small level of non-stationarity (SD ≈ 0.8 dB) meant that the difference between the estimates was, in practice, negligible.
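As a sketch of the kind of fit involved, here is a minimal maximum-likelihood estimate of α and β for a two-interval forced-choice Weibull function (guess rate fixed at 0.5, no lapse term); the synthetic data, contrast levels, and the use of scipy are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def weibull_2ifc(x, alpha, beta):
    """2IFC Weibull: performance rises from 0.5 (guessing) toward 1."""
    return 0.5 + 0.5 * (1.0 - np.exp(-(x / alpha) ** beta))

def neg_log_likelihood(params, x, n_correct, n_trials):
    alpha, beta = params
    if alpha <= 0 or beta <= 0:          # keep the search in a valid region
        return np.inf
    p = np.clip(weibull_2ifc(x, alpha, beta), 1e-6, 1 - 1e-6)
    return -np.sum(n_correct * np.log(p) + (n_trials - n_correct) * np.log(1 - p))

# Synthetic session: 7 contrast levels, 50 trials each, true alpha=0.02, beta=3
rng = np.random.default_rng(0)
contrasts = np.array([0.005, 0.01, 0.015, 0.02, 0.03, 0.045, 0.07])
n_trials = np.full(contrasts.size, 50)
n_correct = rng.binomial(n_trials, weibull_2ifc(contrasts, 0.02, 3.0))

fit = minimize(neg_log_likelihood, x0=[0.02, 2.0],
               args=(contrasts, n_correct, n_trials), method="Nelder-Mead")
alpha_hat, beta_hat = fit.x
print(f"alpha ~ {alpha_hat:.4f}, beta ~ {beta_hat:.2f}")
```

Fitting one function to the pooled trials, versus averaging per-session fits, corresponds to the two estimation strategies compared in the abstract.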
Abstract:
Sensory sensitivity is typically measured using behavioural techniques (psychophysics), which rely on observers responding to very large numbers of stimulus presentations. Psychophysics can be problematic when working with special populations, such as children or clinical patients, because they may lack the compliance or cognitive skills to perform the behavioural tasks. We used an auditory gap-detection paradigm to develop an accurate measure of sensory threshold derived from passively recorded MEG data. Auditory evoked responses were elicited by silent gaps of varying durations in an ongoing noise stimulus. Source modelling was used to spatially filter the MEG data, and sigmoidal 'cortical psychometric functions' relating response amplitude to gap duration were obtained for each individual participant. Fitting these functions and estimating the gap duration at which the evoked response exceeded one standard deviation of the prestimulus brain activity provided an excellent prediction of psychophysical threshold. Thus we have demonstrated that accurate sensory thresholds can be reliably extracted from MEG data recorded while participants listen passively to a stimulus. Because no behavioural task is required, the method is suitable for studies of populations where variations in cognitive skills or vigilance make traditional psychophysics unsuitable.
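A sketch of the threshold-extraction step, assuming per-participant evoked amplitudes for each gap duration have already been obtained from the source-modelled MEG data; the logistic parameterisation, the example numbers, and the scipy fit are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(gap_ms, amp, g50, slope):
    """'Cortical psychometric function': evoked amplitude vs. gap duration."""
    return amp / (1.0 + np.exp(-(gap_ms - g50) / slope))

# Illustrative data: mean evoked amplitude (a.u.) per gap duration, plus the
# SD of this participant's prestimulus baseline activity.
gaps = np.array([1, 2, 3, 5, 8, 12, 20, 40], dtype=float)  # ms
amplitudes = np.array([0.1, 0.15, 0.3, 0.9, 1.6, 1.9, 2.0, 2.1])
baseline_sd = 0.4

params, _cov = curve_fit(sigmoid, gaps, amplitudes, p0=[2.0, 5.0, 2.0])
amp, g50, slope = params

# Threshold: gap duration at which the fitted curve first exceeds one SD of
# prestimulus activity, obtained by inverting the sigmoid at y = baseline_sd.
threshold_ms = g50 - slope * np.log(amp / baseline_sd - 1.0)
print(f"estimated cortical gap-detection threshold ~ {threshold_ms:.1f} ms")
```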