11 results for "Stochastic model"
in the Cambridge University Engineering Department Publications Database
Abstract:
Electron multiplication charge-coupled devices (EMCCDs) are widely used for photon counting experiments and measurements of low-intensity light sources, and are extensively employed in biological fluorescence imaging applications. These devices have complex statistical behaviour that is often not fully considered in the analysis of EMCCD data. Robust and optimal analysis of EMCCD images requires an understanding of their noise properties, in particular to exploit fully the advantages of Bayesian and maximum-likelihood analysis techniques, whose value is increasingly recognised in biological imaging for obtaining robust quantitative measurements from challenging data. To improve our own EMCCD analysis, and in an effort to aid that of the wider bioimaging community, we present, explain and discuss a detailed physical model for EMCCD noise properties, giving a likelihood function for the image counts in each pixel at a given incident intensity, and we explain how to measure the parameters of this model from various calibration images. © 2013 Hirsch et al.
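As a toy illustration of such a pixel likelihood (a minimal sketch, not the authors' full model: read noise, spurious charge and the bias offset are all neglected, and the photon-number sum is truncated at `n_max`), one can marginalise a Poisson photon count through an approximately Gamma-distributed EM-register output:

```python
import math

def emccd_loglike(x, lam, gain, n_max=50):
    """Approximate log-likelihood of an EMCCD output count x for mean
    incident intensity lam (photons/pixel) and EM gain `gain`.

    Marginalises over the unobserved photon number n:
      n ~ Poisson(lam); for n >= 1 the multiplied output is
      approximately Gamma(shape=n, scale=gain).
    Read noise is neglected in this sketch.
    """
    p = math.exp(-lam) if x == 0 else 0.0  # n = 0 contributes only at x = 0
    for n in range(1, n_max + 1):
        pois = math.exp(-lam) * lam ** n / math.factorial(n)
        gamma_pdf = (x ** (n - 1) * math.exp(-x / gain)
                     / (gain ** n * math.factorial(n - 1)))
        p += pois * gamma_pdf
    return math.log(max(p, 1e-300))  # guard against log(0)
```

With a gain of 100, for example, an output of about 200 counts is more likely under an incident intensity of 2 photons than under 0.1 photons, which is exactly the kind of comparison a maximum-likelihood fit exploits.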
Abstract:
This paper describes a novel approach to the analysis of supply and demand of water in California. A stochastic model is developed to assess the future supply of, and demand for, water resources in California. The results are presented in the form of a Sankey diagram in which present and stochastically varying future fluxes of water in California and its sub-regions are traced from source to services, by mapping the various transformations of water from when it is first made available for use, through its treatment, recycling and reuse, to its eventual loss in a variety of sinks. This helps to highlight the connections of water with energy and land resources, including the amount of energy used to pump and treat water, the amount of water used for energy production, and the land resources that create a water demand to produce crops for food. By mapping water in this way, policy-makers can more easily understand the competing uses of water, through the identification of the services it delivers (e.g. sanitation, food production, landscaping), the potential opportunities for improving the management of the resource, and the connections with other resources which are often overlooked in a traditional sector-based management strategy. This paper focuses on a Sankey diagram for water, but the ultimate aim is the visualisation of linked resource futures through inter-connected Sankey diagrams for energy, land and water, tracking changes from the basic resources for all three, their transformations, and the final services they provide.
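By way of illustration only (all node names and figures below are invented, not taken from the paper), a stochastic source-to-service trace reduces to sampling flux scenarios and checking that each intermediate node of the Sankey diagram conserves mass:

```python
import random

random.seed(0)

def sample_future_fluxes():
    """Draw one future scenario of water fluxes (units and numbers hypothetical)."""
    supply = random.gauss(60.0, 5.0)       # total source availability
    ag_share = random.uniform(0.55, 0.70)  # agriculture's share of treated water
    return {
        ("source", "treatment"): supply,
        ("treatment", "agriculture"): supply * ag_share,
        ("treatment", "urban"): supply * (1 - ag_share),
    }

def node_balance(flows, node):
    """Inflow minus outflow at an intermediate node; ~0 if mass is conserved."""
    inflow = sum(v for (_, dst), v in flows.items() if dst == node)
    outflow = sum(v for (src, _), v in flows.items() if src == node)
    return inflow - outflow

flows = sample_future_fluxes()
assert abs(node_balance(flows, "treatment")) < 1e-9
```

Repeating the draw many times yields the distribution of each link's width, which is what a stochastically varying Sankey diagram visualises.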
Abstract:
The movement of chemicals through soil to groundwater is a major cause of degradation of water resources. In many cases, serious human and stock health implications are associated with this form of pollution. The study of the effects of different factors involved in transport phenomena can provide valuable information to find the best remediation approaches. Numerical models are increasingly being used for predicting or analyzing solute transport processes in soils and groundwater. This article presents the development of a stochastic finite element model for the simulation of contaminant transport through soils, with the main focus being on the incorporation of the effects of soil heterogeneity in the model. The governing equations of contaminant transport are presented. The mathematical framework and the numerical implementation of the model are described. The comparison of the results obtained from the developed stochastic model with those obtained from a deterministic method and some experimental results shows that the stochastic model is capable of predicting the transport of solutes in unsaturated soil with higher accuracy than the deterministic one. The importance of the consideration of the effects of soil heterogeneity on contaminant fate is highlighted through a sensitivity analysis regarding the variance of saturated hydraulic conductivity as an index of soil heterogeneity. © 2011 John Wiley & Sons, Ltd.
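The paper's stochastic finite element formulation is not reproduced here, but the underlying idea can be sketched with a much cruder Monte Carlo scheme: treat the saturated hydraulic conductivity K as a lognormal random variable, solve a 1-D advection-dispersion equation by explicit finite differences for each draw, and average the resulting concentration profiles (all coefficients are hypothetical):

```python
import math
import random

random.seed(1)

def solve_transport(K, nx=50, nt=200, dx=0.1, dt=0.005, D=0.01):
    """1-D advection-dispersion, upwind explicit finite differences.
    Velocity is taken proportional to K (unit hydraulic gradient assumed)."""
    v = K * 1.0
    c = [0.0] * nx
    c[0] = 1.0                       # constant-concentration inlet
    for _ in range(nt):
        new = c[:]
        for i in range(1, nx - 1):   # outflow boundary c[nx-1] held at zero
            adv = -v * (c[i] - c[i - 1]) / dx
            disp = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx ** 2
            new[i] = c[i] + dt * (adv + disp)
        new[0] = 1.0
        c = new
    return c

def ensemble_mean(n=100, sigma_lnK=0.5):
    """Monte Carlo over lognormal K; sigma_lnK indexes soil heterogeneity."""
    profiles = []
    for _ in range(n):
        K = math.exp(random.gauss(math.log(0.5), sigma_lnK))
        profiles.append(solve_transport(K))
    nx = len(profiles[0])
    return [sum(p[i] for p in profiles) / n for i in range(nx)]

mean_c = ensemble_mean()
```

Increasing `sigma_lnK` widens the spread of breakthrough behaviour across realisations, which is the kind of heterogeneity effect the sensitivity analysis in the paper quantifies.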
Abstract:
The uncertainty associated with a rainfall-runoff and non-point source loading (NPS) model can be attributed to both the parameterization and model structure. An interesting implication of the areal nature of NPS models is the direct relationship between model structure (i.e. sub-watershed size) and sample size for the parameterization of spatial data. The approach of this research is to find structural limitations in scale for the use of the conceptual NPS model, then examine the scales at which suitable stochastic depictions of key parameter sets can be generated. The overlapping regions are optimal (and possibly the only suitable regions) for conducting meaningful stochastic analysis with a given NPS model. Previous work has sought to find optimal scales for deterministic analysis (where, in fact, calibration can be adjusted to compensate for sub-optimal scale selection); however, analysis of stochastic suitability and uncertainty associated with both the conceptual model and the parameter set, as presented here, is novel; as is the strategy of delineating a watershed based on the uncertainty distribution. The results of this paper demonstrate a narrow range of acceptable model structure for stochastic analysis in the chosen NPS model. In the case examined, the uncertainties associated with parameterization and parameter sensitivity are shown to be outweighed in significance by those resulting from structural and conceptual decisions. © 2011 Copyright IAHS Press.
Abstract:
In speech recognition systems, language models (LMs) are often constructed by training and combining multiple n-gram models. They can be used either to represent different genres or tasks found in diverse text sources, or to capture stochastic properties of different linguistic symbol sequences, for example, syllables and words. Unsupervised LM adaptation may also be used to further improve robustness to varying styles or tasks. When using these techniques, extensive software changes are often required. In this paper an alternative and more general approach based on weighted finite state transducers (WFSTs) is investigated for LM combination and adaptation. As it is entirely based on well-defined WFST operations, minimal change to decoding tools is needed. A wide range of LM combination configurations can be flexibly supported. An efficient on-the-fly WFST decoding algorithm is also proposed. Significant error rate gains of 7.3% relative were obtained on a state-of-the-art broadcast audio recognition task using a history-dependently adapted multi-level LM modelling both syllable and word sequences. © 2010 IEEE.
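The WFST machinery itself is not shown here, but one common combination scheme that a weighted union of component LM transducers can realise is linear interpolation of the component probabilities. A toy fragment (hypothetical bigram probabilities and interpolation weight, not from the paper) makes the effect concrete:

```python
import math

# Two component bigram LMs, e.g. trained on different genres.
lm_news = {("the", "market"): 0.20, ("the", "game"): 0.05}
lm_sport = {("the", "market"): 0.02, ("the", "game"): 0.30}

def interpolate(history, word, weight=0.7, floor=1e-4):
    """Linear interpolation P(w|h) = weight*P1 + (1-weight)*P2,
    with a small floor standing in for proper back-off smoothing."""
    p1 = lm_news.get((history, word), floor)
    p2 = lm_sport.get((history, word), floor)
    return weight * p1 + (1 - weight) * p2

logp = math.log(interpolate("the", "game"))
```

Adapting the interpolation weight per utterance or per history is one way to picture the unsupervised, history-dependent adaptation the abstract describes.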
Abstract:
We present a stochastic simulation technique for subset selection in time series models, based on the use of indicator variables with the Gibbs sampler within a hierarchical Bayesian framework. As an example, the method is applied to the selection of subset linear AR models, in which only significant lags are included. Joint sampling of the indicators and parameters is found to speed convergence. We discuss the possibility of model mixing where the model is not well determined by the data, and the extension of the approach to include non-linear model terms.
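A compact sketch of the indicator-based Gibbs sampler (simplified relative to the paper: the noise variance is fixed and known, a Gaussian slab prior on the coefficients is integrated out analytically, and no joint indicator-parameter move is used) for selecting the active lags of a subset AR model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR series in which only lags 1 and 4 are active.
T, true_phi = 400, {1: 0.5, 4: 0.3}
y = np.zeros(T)
for t in range(4, T):
    y[t] = sum(phi * y[t - k] for k, phi in true_phi.items()) + rng.normal()

max_lag = 6
X = np.column_stack([y[max_lag - k: T - k] for k in range(1, max_lag + 1)])
target = y[max_lag:]

def log_marginal(gamma, tau2=10.0, sigma2=1.0):
    """Log marginal likelihood of the lag subset indexed by gamma,
    with coefficients ~ N(0, tau2 I) integrated out (constants dropped)."""
    idx = np.flatnonzero(gamma)
    if idx.size == 0:
        return -0.5 * target @ target / sigma2
    Xg = X[:, idx]
    A = Xg.T @ Xg / sigma2 + np.eye(idx.size) / tau2
    b = Xg.T @ target / sigma2
    mu = np.linalg.solve(A, b)
    _, logdet = np.linalg.slogdet(A)
    return (-0.5 * target @ target / sigma2 + 0.5 * b @ mu
            - 0.5 * logdet - 0.5 * idx.size * np.log(tau2))

gamma = np.ones(max_lag, dtype=int)
counts = np.zeros(max_lag)
for sweep in range(200):
    for j in range(max_lag):              # Gibbs step for each indicator
        lp = []
        for v in (0, 1):
            gamma[j] = v
            lp.append(log_marginal(gamma))
        p1 = 1.0 / (1.0 + np.exp(lp[0] - lp[1]))  # prior P(gamma_j = 1) = 1/2
        gamma[j] = int(rng.random() < p1)
    if sweep >= 50:                       # discard burn-in sweeps
        counts += gamma
inclusion = counts / 150                  # posterior inclusion frequencies
```

On the simulated series, the inclusion frequencies typically concentrate on the two truly active lags; joint sampling of indicators and parameters, as in the paper, speeds convergence when they are strongly coupled.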
Abstract:
Using an entropy argument, it is shown that stochastic context-free grammars (SCFGs) can model sources with hidden branching processes more efficiently than stochastic regular grammars (or, equivalently, HMMs). However, the automatic estimation of SCFGs using the Inside-Outside algorithm is limited in practice by its O(n³) complexity. In this paper, a novel pre-training algorithm is described which can give significant computational savings. Also, the need for controlling the way that non-terminals are allocated to hidden processes is discussed, and a solution is presented in the form of a grammar minimization procedure. © 1990.
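The cubic cost comes from the three nested loops of the inside pass, over span lengths, start positions and split points, as a minimal sketch for a Chomsky-normal-form grammar (with made-up rule probabilities, not from the paper) shows:

```python
from collections import defaultdict

binary = {            # P(A -> B C)
    ("S", ("NP", "VP")): 1.0,
    ("VP", ("V", "NP")): 1.0,
}
unary = {             # P(A -> terminal)
    ("NP", "they"): 0.5, ("NP", "fish"): 0.5,
    ("V", "can"): 1.0,
}

def inside(words):
    """Inside probabilities beta[(i, j, A)] = P(A =>* words[i..j])."""
    n = len(words)
    beta = defaultdict(float)
    for i, w in enumerate(words):
        for (A, term), p in unary.items():
            if term == w:
                beta[(i, i, A)] += p
    for span in range(2, n + 1):                  # O(n) span lengths
        for i in range(n - span + 1):             # O(n) start points
            j = i + span - 1
            for k in range(i, j):                 # O(n) split points
                for (A, (B, C)), p in binary.items():
                    beta[(i, j, A)] += p * beta[(i, k, B)] * beta[(k + 1, j, C)]
    return beta[(0, n - 1, "S")]

p_sentence = inside(["they", "can", "fish"])
```

Inside-Outside training runs this pass (and a matching outside pass) over every training string at every iteration, which is why pre-training that starts from a cheaper regular-grammar approximation can save so much computation.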
Abstract:
This paper describes two applications in speech recognition of the use of stochastic context-free grammars (SCFGs) trained automatically via the Inside-Outside Algorithm. First, SCFGs are used to model VQ encoded speech for isolated word recognition and are compared directly to HMMs used for the same task. It is shown that SCFGs can model this low-level VQ data accurately and that a regular grammar based pre-training algorithm is effective both for reducing training time and obtaining robust solutions. Second, an SCFG is inferred from a transcription of the speech used to train a phoneme-based recognizer in an attempt to model phonotactic constraints. When used as a language model, this SCFG gives improved performance over a comparable regular grammar or bigram. © 1991.