933 results for Random Pore Model
Abstract:
When constructing and using environmental models, it is typical that many of the inputs to the models will not be known perfectly. In some cases, it will be possible to make observations, or occasionally to use physics-based uncertainty propagation, to ascertain the uncertainty on these inputs. However, such observations are often unavailable or even impossible, and another approach to characterising the uncertainty on the inputs must be sought. Even when observations are available, if the analysis is being carried out within a Bayesian framework then prior distributions will have to be specified. One option for gathering, or at least estimating, this information is to employ expert elicitation. Expert elicitation is well studied within statistics and psychology and involves the assessment of the beliefs of a group of experts about an uncertain quantity (for example, an input or parameter within a model), typically in terms of obtaining a probability distribution. One of the challenges in expert elicitation is to minimise the biases that might enter into the judgements made by the individual experts, and then to come to a consensus decision within the group of experts. Effort is made in the elicitation exercise to prevent biases clouding the judgements through well-devised questioning schemes. It is also important that, when reaching a consensus, the experts are exposed to the knowledge of the others in the group. Within the FP7 UncertWeb project (http://www.uncertweb.org/), there is a requirement to build a Web-based tool for expert elicitation. In this paper, we discuss some of the issues of building a Web-based elicitation system, covering both the technological aspects and the statistical and scientific issues. In particular, we demonstrate two tools: a Web-based system for the elicitation of continuous random variables and a system designed to elicit uncertainty about categorical random variables in the setting of landcover classification uncertainty. The first of these examples is a generic tool developed to elicit uncertainty about univariate continuous random variables. It is designed to be used within an application context and extends the existing SHELF method, adding a web interface and access to metadata. The tool is developed so that it can be readily integrated with environmental models exposed as web services. The second example was developed for the TREES-3 initiative, which monitors tropical landcover change through ground-truthing at confluence points. It allows experts to validate the accuracy of automated landcover classifications using site-specific imagery and local knowledge. Experts may provide uncertainty information at various levels: from a general rating of their confidence in a site validation to a numerical ranking of the possible landcover types within a segment. A key challenge in the web-based setting is the design of the user interface and the method of interaction between the problem owner and the problem experts. We show the workflow of the elicitation tool, and show how we can represent the final elicited distributions and confusion matrices using UncertML, ready for integration into uncertainty-enabled workflows. We also show how the metadata associated with the elicitation exercise is captured and can be referenced from the elicited result, providing crucial lineage information and thus traceability in the decision-making process.
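To illustrate the kind of computation behind such a tool, the sketch below fits a normal prior to a set of elicited quantiles; the quantile values, the choice of a normal family and the least-squares fitting criterion are purely illustrative assumptions, not the SHELF procedure or the UncertWeb implementation.

```python
# Illustrative sketch (not the UncertWeb tool itself): fit a normal prior to
# quantiles elicited from an expert about a univariate continuous quantity.
import numpy as np
from scipy import optimize, stats

# Hypothetical elicited judgements: probability levels and the expert's values.
probs = np.array([0.05, 0.25, 0.50, 0.75, 0.95])
values = np.array([2.0, 3.1, 3.8, 4.6, 6.0])

def quantile_misfit(params):
    """Sum of squared differences between fitted and elicited quantiles."""
    mu, log_sigma = params
    fitted = stats.norm.ppf(probs, loc=mu, scale=np.exp(log_sigma))
    return np.sum((fitted - values) ** 2)

result = optimize.minimize(quantile_misfit, x0=[values.mean(), 0.0])
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
print(f"fitted prior: Normal(mean={mu_hat:.2f}, sd={sigma_hat:.2f})")
```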
Abstract:
In this paper we present a novel method for emulating a stochastic, or random output, computer model and show its application to a complex rabies model. The method is evaluated both in terms of accuracy and computational efficiency on synthetic data and the rabies model. We address the issue of experimental design and provide empirical evidence on the effectiveness of utilizing replicate model evaluations compared to a space-filling design. We employ the Mahalanobis error measure to validate the heteroscedastic Gaussian process based emulator predictions for both the mean and (co)variance. The emulator allows efficient screening to identify important model inputs and better understanding of the complex behaviour of the rabies model.
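A minimal sketch of the Mahalanobis validation measure mentioned above, applied to hypothetical emulator predictions; the numbers and the diagonal predictive covariance are assumptions for illustration, not the rabies-model study.

```python
# Mahalanobis validation measure for an emulator: given held-out simulator
# outputs y, the emulator's predictive mean m and predictive covariance V,
# the statistic (y - m)^T V^{-1} (y - m) should be close to the number of
# validation points if the emulator is well calibrated.
import numpy as np

def mahalanobis_error(y, m, V):
    """Mahalanobis distance between validation outputs and emulator prediction."""
    resid = np.asarray(y) - np.asarray(m)
    return float(resid @ np.linalg.solve(V, resid))

# Hypothetical numbers purely for illustration.
rng = np.random.default_rng(0)
n_val = 20
V = np.diag(rng.uniform(0.1, 0.5, size=n_val))       # heteroscedastic variances
m = rng.normal(size=n_val)                            # emulator predictive mean
y = m + rng.multivariate_normal(np.zeros(n_val), V)   # held-out "simulator" outputs
print(mahalanobis_error(y, m, V), "vs expected ~", n_val)
```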
Abstract:
The occurrence of spalling is a major factor in determining the fire resistance of concrete constructions. The apparently random occurrence of spalling has limited the development and application of fire resistance modelling for concrete structures. This thesis describes an experimental investigation into the spalling of concrete on exposure to elevated temperatures. It has been shown that spalling may be categorised into four distinct types: aggregate spalling, corner spalling, surface spalling and explosive spalling. Aggregate spalling has been found to be a form of shear failure of aggregates local to the heated surface. The susceptibility of any particular concrete to aggregate spalling can be quantified from parameters which include the coefficients of thermal expansion of both the aggregate and the surrounding mortar, the size and thermal diffusivity of the aggregate and the rate of heating. Corner spalling, which is particularly significant for the fire resistance of concrete columns, is a result of concrete losing its tensile strength at elevated temperatures. Surface spalling is the result of excessive pore pressures within heated concrete. An empirical model has been developed to allow quantification of the pore pressures, and a material failure model is proposed. The dominant parameters are rate of heating, pore saturation and concrete permeability. Surface spalling may be alleviated by limiting pore pressure development, and a number of methods to this end have been evaluated. Explosive spalling involves the catastrophic failure of a concrete element and may be caused by either of two distinct mechanisms. In the first instance, excessive pore pressures can cause explosive spalling, although the effect is limited principally to unloaded or relatively small specimens. A second cause of explosive spalling is where the superimposition of thermally induced stresses on applied load stresses exceeds the concrete's strength.
Abstract:
This thesis studied the effect of (i) the number of grating components and (ii) parameter randomisation on root-mean-square (r.m.s.) contrast sensitivity and spatial integration. The effectiveness of spatial integration without external spatial noise depended on the number of equally spaced orientation components in the sum of gratings. The critical area marking the saturation of spatial integration was found to decrease when the number of components increased from 1 to 5-6 but increased again at 8-16 components. The critical area behaved similarly as a function of the number of grating components when stimuli consisted of 3, 6 or 16 components with different orientations and/or phases embedded in spatial noise. Spatial integration seemed to depend on the global Fourier structure of the stimulus. Spatial integration was similar for sums of two vertical cosine or sine gratings with various Michelson contrasts in noise. The critical area for a grating sum was found to be a sum of logarithmic critical areas for the component gratings weighted by their relative Michelson contrasts. The human visual system was modelled as a simple image processor in which the visual stimulus is first low-pass filtered by the optical modulation transfer function of the human eye and then high-pass filtered, up to the spatial cut-off frequency determined by the lowest neural sampling density, by the neural modulation transfer function of the visual pathways. Internal noise is then added before signal interpretation occurs in the brain. Detection is mediated by a local, spatially windowed matched filter. The model was extended to include complex stimuli and was found to describe the data successfully. The shape of the spatial integration function was similar for non-randomised and randomised simple and complex gratings. However, orientation and/or phase randomisation reduced r.m.s. contrast sensitivity by a factor of 2. The effect of parameter randomisation on spatial integration was modelled under the assumption that human observers change their strategy from cross-correlation (i.e., a matched filter) to auto-correlation detection when uncertainty is introduced to the task. The model described the data accurately.
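The detection model described above can be sketched schematically as a filtering pipeline; the MTF forms, constants and noise level below are placeholders rather than the thesis's fitted parameters.

```python
# Schematic 1-D sketch of the detection model: optical low-pass filtering,
# neural high-pass filtering, additive internal noise, then detection by a
# matched filter (all constants are illustrative placeholders).
import numpy as np

n = 256
x = np.linspace(-1, 1, n)                        # degrees of visual angle
signal = np.cos(2 * np.pi * 8 * x)               # 8 c/deg grating stimulus
freqs = np.fft.fftfreq(n, d=x[1] - x[0])         # spatial frequencies (c/deg)

optical_mtf = np.exp(-np.abs(freqs) / 30.0)      # placeholder optical low-pass MTF
neural_mtf = 1.0 - np.exp(-np.abs(freqs) / 2.0)  # placeholder neural high-pass MTF

filtered = np.real(np.fft.ifft(np.fft.fft(signal) * optical_mtf * neural_mtf))
internal = filtered + np.random.default_rng(1).normal(0.0, 0.2, n)  # internal noise

template = filtered / np.linalg.norm(filtered)   # matched filter (cross-correlation)
decision_variable = float(internal @ template)
print("matched-filter response:", decision_variable)
```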
Abstract:
This thesis includes analysis of disordered spin ensembles corresponding to Exact Cover, a multi-access channel problem, and composite models combining sparse and dense interactions. The satisfiability problem in Exact Cover is addressed using a statistical analysis of a simple branch and bound algorithm. The algorithm can be formulated in the large system limit as a branching process, for which critical properties can be analysed. Far from the critical point a set of differential equations may be used to model the process, and these are solved by numerical integration and exact bounding methods. The multi-access channel problem is formulated as an equilibrium statistical physics problem for the case of bit transmission on a channel with power control and synchronisation. A sparse code division multiple access method is considered and the optimal detection properties are examined in the typical case by use of the replica method, and compared to the detection performance achieved by iterative decoding methods. These codes are found to exhibit phenomena closely resembling those of the well-understood dense codes. The composite model is introduced as an abstraction of canonical sparse and dense disordered spin models. The model includes couplings due to both dense and sparse topologies simultaneously. The new type of code is shown to outperform sparse and dense codes in some regimes, both in optimal performance and in the performance achieved by iterative detection methods in finite systems.
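For concreteness, a simple backtracking branch-and-bound search for Exact Cover, the kind of elementary algorithm whose typical-case behaviour can be treated as a branching process, might look like the following sketch (illustrative only, not the thesis's analysed algorithm).

```python
# Backtracking search for Exact Cover: select subsets so that every element of
# the universe is covered exactly once, branching on the most constrained element
# and pruning any subset that overlaps the current choice.
def exact_cover(universe, subsets, partial=()):
    """Yield selections of subsets that cover `universe` exactly once each."""
    if not universe:
        yield partial
        return
    # Branch on the element covered by the fewest remaining subsets.
    element = min(universe, key=lambda e: sum(e in s for s in subsets))
    for s in [s for s in subsets if element in s]:
        remaining = [t for t in subsets if not (t & s)]   # prune overlapping sets
        yield from exact_cover(universe - s, remaining, partial + (s,))

U = frozenset(range(1, 8))
S = [frozenset(s) for s in ([1, 4, 7], [1, 4], [4, 5, 7], [3, 5, 6], [2, 3, 6, 7], [2, 7])]
print(next(exact_cover(U, S)))   # one exact cover, if any exists
```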
Abstract:
We consider the random input problem for a nonlinear system modeled by the integrable one-dimensional self-focusing nonlinear Schrödinger equation (NLSE). We concentrate on the properties obtained from the direct scattering problem associated with the NLSE. We discuss some general issues regarding soliton creation from random input. We also study the averaged spectral density of random quasilinear waves generated in the NLSE channel for two models of the disordered input field profile. The first model is symmetric complex Gaussian white noise and the second one is a real dichotomous (telegraph) process. For the former model, the closed-form expression for the averaged spectral density is obtained, while for the dichotomous real input we present the small-noise perturbative expansion for the same quantity. In the case of the dichotomous input, we also obtain the distribution of the minimal pulse width required for soliton generation. The obtained results can be applied to a multitude of problems including random nonlinear Fraunhofer diffraction, transmission properties of randomly apodized long-period fiber Bragg gratings, and the propagation of incoherent pulses in optical fibers.
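A direct Monte Carlo check of such averaged spectral densities can be sketched with a split-step Fourier integration of the focusing NLSE driven by complex Gaussian white-noise input; grid sizes, noise amplitude and propagation distance below are arbitrary assumptions, not the paper's closed-form result.

```python
# Monte Carlo sketch: propagate complex Gaussian white-noise input through the
# focusing NLSE with a split-step Fourier scheme and average the output power
# spectrum over realizations.
import numpy as np

n, L, dz, n_steps, n_real = 512, 40.0, 0.01, 200, 50
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
rng = np.random.default_rng(0)
spectrum = np.zeros(n)

for _ in range(n_real):
    # Complex Gaussian white-noise input field (amplitude is a free parameter here).
    u = (rng.normal(size=n) + 1j * rng.normal(size=n)) * 0.1
    for _ in range(n_steps):
        u = u * np.exp(1j * np.abs(u) ** 2 * dz)                       # nonlinear step
        u = np.fft.ifft(np.fft.fft(u) * np.exp(-0.5j * k ** 2 * dz))   # dispersive step
    spectrum += np.abs(np.fft.fft(u)) ** 2 / n_real

print("averaged spectral density computed on", n, "Fourier modes")
```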
Abstract:
We obtain the exact asymptotic result for the disorder-averaged probability distribution function for a random walk in a biased Sinai model and show that it is characterized by a creeping behavior of the displacement moments with time.
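A toy numerical counterpart of this setting simulates a biased walk in a quenched random environment and averages the displacement moments over disorder; the disorder distribution and bias used below are arbitrary assumptions, not those of the paper.

```python
# Toy simulation of a biased random walk in a quenched random environment
# (Sinai-type disorder), averaging displacement moments over realizations.
import numpy as np

rng = np.random.default_rng(42)
n_disorder, n_sites, t_max = 200, 4001, 2000

moments = np.zeros(2)
for _ in range(n_disorder):
    # Quenched right-hop probabilities with a weak bias to the right (mean 0.55).
    p_right = rng.uniform(0.3, 0.8, size=n_sites)
    pos = n_sites // 2
    for _ in range(t_max):
        pos += 1 if rng.random() < p_right[pos] else -1
    x = pos - n_sites // 2
    moments += np.array([x, x ** 2]) / n_disorder

print("disorder-averaged <x>, <x^2> at t =", t_max, ":", moments)
```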
Abstract:
This paper proposes a semiparametric smooth-coefficient stochastic production frontier model where all the coefficients are expressed as unknown functions of environmental factors. The inefficiency term is multiplicatively decomposed into a scaling function of the environmental factors and a standard truncated normal random variable. A testing procedure is suggested for the relevance of the environmental factors. A Monte Carlo study shows plausible finite-sample behavior of our proposed estimation and inference procedure. An empirical example is given, where both the semiparametric and standard parametric models are estimated and the results are compared.
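The structure of such a model can be made concrete with a small data-generating sketch; the smooth coefficient functions and the inefficiency scaling below are arbitrary illustrative choices, not the paper's specification or estimator.

```python
# Data-generating sketch of a smooth-coefficient stochastic frontier:
# coefficients and the inefficiency scaling are smooth functions of an
# environmental variable z (functional forms are illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 500
z = rng.uniform(0, 1, n)                      # environmental factor
x = rng.uniform(1, 5, n)                      # log input

beta0 = 1.0 + 0.5 * np.sin(np.pi * z)         # smooth intercept beta_0(z)
beta1 = 0.6 + 0.3 * z ** 2                    # smooth slope beta_1(z)
scale_u = np.exp(0.5 * z)                     # scaling function for inefficiency

v = rng.normal(0.0, 0.2, n)                   # symmetric noise term
u = scale_u * stats.truncnorm.rvs(0, np.inf, loc=0, scale=0.4,
                                  size=n, random_state=rng)  # inefficiency >= 0
y = beta0 + beta1 * x + v - u                 # frontier output in logs
print("mean technical inefficiency:", u.mean())
```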
Abstract:
This thesis explores the process of developing a principled approach for translating a model of mental-health risk expertise into a probabilistic graphical structure. Probabilistic graphical structures combine graph theory and probability theory and provide numerous advantages when it comes to representing domains involving uncertainty, such as the mental health domain. This thesis builds on the advantages that probabilistic graphical structures offer in representing such domains. The Galatean Risk Screening Tool (GRiST) is a psychological model for mental health risk assessment based on fuzzy sets. In this thesis the knowledge encapsulated in the psychological model was used to develop the structure of the probability graph by exploiting the semantics of the clinical expertise. The thesis describes how a chain graph can be developed from the psychological model to provide a probabilistic evaluation of risk that complements the one generated by GRiST's clinical expertise. This was achieved by decomposing the GRiST knowledge structure into component parts, which were in turn mapped onto equivalent probabilistic graphical structures such as Bayesian Belief Nets and Markov Random Fields to produce a composite chain graph that provides a probabilistic classification of risk expertise to complement the expert clinical judgements.
Abstract:
A main unsolved problem in the RNA World scenario for the origin of life is how a template-dependent RNA polymerase ribozyme emerged from short RNA oligomers obtained by random polymerization on mineral surfaces. A number of computational studies have shown that the structural repertoire yielded by that process is dominated by topologically simple structures, notably hairpin-like ones. A fraction of these could display RNA ligase activity and catalyze the assembly of larger, eventually functional RNA molecules retaining their previous modular structure: molecular complexity increases but template replication is absent. This allows us to build up a stepwise model of ligation-based, modular evolution that could pave the way to the emergence of a ribozyme with RNA replicase activity, a step at which information-driven Darwinian evolution would be triggered. Copyright © 2009 RNA Society.
Abstract:
Fps1p is a glycerol efflux channel from Saccharomyces cerevisiae. In this atypical major intrinsic protein neither of the signature NPA motifs of the family, which are part of the pore, is preserved. To understand the functional consequences of this feature, we analyzed the pseudo-NPA motifs of Fps1p by site-directed mutagenesis and assayed the resultant mutant proteins in vivo. In addition, we took advantage of the fact that the closest bacterial homolog of Fps1p, Escherichia coli GlpF, can be functionally expressed in yeast, thus enabling the analysis in yeast cells of mutations that make this typical major intrinsic protein more similar to Fps1p. We observed that mutations made in Fps1p to "restore" the signature NPA motifs did not substantially affect channel function. In contrast, when GlpF was mutated to resemble Fps1p, all mutants had reduced activity compared with wild type. We rationalized these data by constructing models of one GlpF mutant and of the transmembrane core of Fps1p. Our model predicts that the pore of Fps1p is more flexible than that of GlpF. We discuss the fact that this may accommodate the divergent NPA motifs of Fps1p and that the different pore structures of Fps1p and GlpF may reflect the physiological roles of the two glycerol facilitators.
Abstract:
We study phenomenological scaling theories of polymer dynamics in random media, employing the existing scaling theories of polymer chains and percolation statistics. We investigate both the Rouse and the Zimm model for Brownian dynamics and estimate the diffusion constant of the center-of-mass of the chain in such disordered media. For the internal dynamics of the chain, we estimate the dynamic exponents. We propose a similar scaling theory for the reptation dynamics of the chain in the framework of Flory theory for the disordered medium. The modifications in the case of correlated disorders are also discussed.
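As a reference point for these scaling arguments, the standard free-solution Rouse and Zimm scalings can be written down directly; the exponents derived in the paper for random media differ from these baselines.

```python
# Baseline (free-solution) Rouse and Zimm scaling relations for a self-avoiding
# chain of N monomers; included only as a reference point, not the paper's
# disordered-medium results.
nu = 0.588  # Flory exponent in good solvent

def rouse_scaling(N):
    """Centre-of-mass diffusion constant and longest relaxation time, Rouse dynamics."""
    return {"D": N ** -1.0, "tau": N ** (1 + 2 * nu)}

def zimm_scaling(N):
    """Centre-of-mass diffusion constant and longest relaxation time, Zimm dynamics."""
    return {"D": N ** -nu, "tau": N ** (3 * nu)}

for N in (100, 1000):
    print(N, rouse_scaling(N), zimm_scaling(N))
```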
Abstract:
We suggest a model for data losses in a single node (memory buffer) of a packet-switched network (like the Internet) which reduces to one-dimensional discrete random walks with unusual boundary conditions. By construction, the model has critical behavior with a sharp transition from exponentially small to finite losses with increasing data arrival rate. We show that for a finite-capacity buffer at the critical point the loss rate exhibits strong fluctuations and non-Markovian power-law correlations in time, in spite of the Markovian character of the data arrival process.
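A toy version of such a buffer model can be simulated directly as a bounded random walk; the service rule and boundary handling below are simplifications assumed for illustration, not necessarily the paper's boundary conditions.

```python
# Toy simulation of a finite memory buffer: at each tick one packet is served
# and a Poisson-distributed number of packets arrives; arrivals that overflow
# the buffer are lost. The loss fraction changes sharply near arrival rate = 1.
import numpy as np

def loss_rate(arrival_rate, capacity=100, n_steps=100_000, seed=0):
    rng = np.random.default_rng(seed)
    occupancy, lost, arrived = 0, 0, 0
    for _ in range(n_steps):
        a = rng.poisson(arrival_rate)            # Markovian arrivals
        arrived += a
        occupancy = max(occupancy - 1, 0)        # serve one packet per tick
        lost += max(occupancy + a - capacity, 0) # overflow is dropped
        occupancy = min(occupancy + a, capacity)
    return lost / max(arrived, 1)

for rate in (0.8, 0.95, 1.0, 1.05, 1.2):         # critical point near rate = 1
    print(f"arrival rate {rate:.2f}: loss fraction {loss_rate(rate):.4f}")
```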
Abstract:
The concept of random lasers exploiting multiple scattering of photons in an amplifying disordered medium in order to generate coherent light without a traditional laser resonator has attracted a great deal of attention in recent years. This research area lies at the interface of the fundamental theory of disordered systems and laser science. The idea was originally proposed in the context of astrophysics in the 1960s by V.S. Letokhov, who studied scattering with "negative absorption" of the interstellar molecular clouds. Research on random lasers has since developed into a mature experimental and theoretical field. A simple design of such lasers would be promising for potential applications. However, in traditional random lasers the properties of the output radiation are typically characterized by complex features in the spatial, spectral and time domains, making them less attractive than standard laser systems in terms of practical applications. Recently, an interesting and novel type of one-dimensional random laser that operates in a conventional telecommunication fibre without any pre-designed resonator mirrors, the random distributed feedback fibre laser, was demonstrated. The positive feedback required for laser generation in random fibre lasers is provided by the Rayleigh scattering from the inhomogeneities of the refractive index that are naturally present in silica glass. In the proposed laser concept, the randomly backscattered light is amplified through the Raman effect, providing distributed gain over distances up to 100 km. Although the effective reflection due to the Rayleigh scattering is extremely small (~0.1%), the lasing threshold may be exceeded when a sufficiently large distributed Raman gain is provided. Such a random distributed feedback fibre laser has a number of interesting and attractive features. The fibre waveguide geometry provides transverse confinement, and effectively one-dimensional random distributed feedback leads to the generation of a stationary near-Gaussian beam with a narrow spectrum. A random distributed feedback fibre laser has efficiency and performance that are comparable to and even exceed those of similar conventional fibre lasers. The key features of the generated radiation of random distributed feedback fibre lasers include: a stationary narrow-band continuous modeless spectrum that is free of mode competition, nonlinear power broadening, and an output beam with a Gaussian profile in the fundamental transverse mode (generated both in single-mode and multi-mode fibres). This review presents the current status of research in the field of random fibre lasers and shows their potential and perspectives. We start with an introductory overview of conventional distributed feedback lasers and traditional random lasers to set the stage for the discussion of random fibre lasers. We then present a theoretical analysis and experimental studies of various random fibre laser configurations, including widely tunable, multi-wavelength, narrow-band generation, and random fibre lasers operating in different spectral bands in the 1-1.6 μm range. Then we discuss existing and future applications of random fibre lasers, including telecommunication and distributed long-reach sensor systems. A theoretical description of random lasers is very challenging and is strongly linked with the theory of disordered systems and kinetic theory.
We outline two key models governing the generation of random fibre lasers: the average power balance model and the nonlinear Schrödinger equation based model. Recently invented random distributed feedback fibre lasers represent a new and exciting field of research that brings together such diverse areas of science as laser physics, the theory of disordered systems, fibre optics and nonlinear science. Stable random generation in optical fibre opens up new possibilities for research on wave transport and localization in disordered media. We hope that this review will provide background information for research in various fields and will stimulate cross-disciplinary collaborations on random fibre lasers. © 2014 Elsevier B.V.
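As an indication of what the average power balance model involves, the schematic right-hand sides below couple pump depletion, Raman gain, fibre loss and weak Rayleigh feedback; the coefficient values and the omitted photon-energy ratio are placeholders, and a realistic model is solved as a two-point boundary-value problem for the counter-propagating waves.

```python
# Schematic form of an average power-balance model for a Raman-pumped random
# distributed feedback fibre laser. Coefficients are placeholders for
# illustration, not measured fibre parameters.
g_R   = 0.5e-3    # Raman gain coefficient, 1/(W m)    (placeholder)
alpha = 0.2e-3    # fibre loss, 1/m                    (placeholder)
eps   = 1e-7      # Rayleigh backscattering rate, 1/m  (placeholder)

def d_pump_dz(Pp, Ps_fwd, Ps_bwd):
    """Pump power depleted by fibre loss and by transfer to both Stokes waves."""
    return -alpha * Pp - g_R * Pp * (Ps_fwd + Ps_bwd)

def d_stokes_dz(Pp, Ps_this, Ps_other, direction=+1):
    """Stokes wave: Raman gain, fibre loss, and weak Rayleigh feedback from the
    counter-propagating wave; direction is +1 (forward) or -1 (backward)."""
    return direction * (g_R * Pp * Ps_this - alpha * Ps_this + eps * Ps_other)
```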
Developing a probabilistic graphical structure from a model of mental-health clinical risk expertise
Abstract:
This paper explores the process of developing a principled approach for translating a model of mental-health risk expertise into a probabilistic graphical structure. The Galatean Risk Screening Tool [1] is a psychological model for mental health risk assessment based on fuzzy sets. This paper details how the knowledge encapsulated in the psychological model was used to develop the structure of the probability graph by exploiting the semantics of the clinical expertise. These semantics are formalised by a detailed specification for an XML structure used to represent the expertise. The component parts were then mapped to equivalent probabilistic graphical structures such as Bayesian Belief Nets and Markov Random Fields to produce a composite chain graph that provides a probabilistic classification of risk expertise to complement the expert clinical judgements. © Springer-Verlag 2010.
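A toy illustration of the general mapping idea (with hypothetical node names and a noisy-OR combination standing in for the real conditional probability tables) is sketched below; it is not GRiST's actual structure or the chain-graph semantics developed in the paper.

```python
# Toy illustration only: a small fragment of a risk hierarchy whose leaf-level
# probabilities (derived from fuzzy membership degrees) are combined bottom-up
# with a noisy-OR rule, in the spirit of a directed part of a chain graph.
def noisy_or(parent_probs):
    """Combine independent parent contributions into P(risk cue present)."""
    prob_absent = 1.0
    for p in parent_probs:
        prob_absent *= (1.0 - p)
    return 1.0 - prob_absent

# Hypothetical leaf-level probabilities for illustrative risk cues.
leaf_risk = {"hopelessness": 0.6, "previous_attempt": 0.3, "social_isolation": 0.2}

# Aggregate leaves into an intermediate node, then into a top-level judgement.
intermediate = noisy_or([leaf_risk["hopelessness"], leaf_risk["previous_attempt"]])
top_level_risk = noisy_or([intermediate, leaf_risk["social_isolation"]])
print(f"P(elevated risk) ~ {top_level_risk:.2f}")
```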