11 results for stochastic process
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
Commonly used repair rate models for repairable systems in the reliability literature are renewal processes, generalised renewal processes or non-homogeneous Poisson processes. In addition to these models, geometric processes (GP) are studied occasionally. The GP, however, can only model systems with monotonically changing (increasing, decreasing or constant) failure intensities. This paper deals with the reliability modelling of failure processes for repairable systems where the failure intensity shows a bathtub-type non-monotonic behaviour. A new stochastic process, an extended Poisson process, is introduced in this paper. Reliability indices are presented, and the parameters of the new process are estimated. Experimental results on a data set demonstrate the validity of the new process.
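For context, a minimal sketch of the limitation the abstract mentions, assuming Lam's definition of a geometric process (inter-failure times scaled by a fixed ratio a, with an exponential underlying distribution); the rate and ratio values are illustrative, not taken from the paper. Because every gap is scaled by the same ratio, the trend can only be monotone: shrinking (a > 1), growing (a < 1) or constant (a = 1).

    # Sketch: geometric process (GP) simulation under Lam's definition.
    # X_k = Y_k / a**(k-1), where the Y_k are i.i.d. (here exponential).
    # Illustrative parameters only; not taken from the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_gp(n_failures, rate=1.0, a=1.05):
        y = rng.exponential(scale=1.0 / rate, size=n_failures)   # underlying renewal times
        k = np.arange(n_failures)
        return y / a**k                                           # inter-failure times X_k

    x = simulate_gp(200, a=1.05)          # a > 1: deteriorating system
    print("mean of first 50 gaps :", x[:50].mean())
    print("mean of last 50 gaps  :", x[-50:].mean())              # systematically shorter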
Abstract:
The basic repair rate models for repairable systems may be homogeneous Poisson processes, renewal processes or non-homogeneous Poisson processes. In addition to these models, geometric processes are studied occasionally. Geometric processes, however, can only model systems with monotonically changing (increasing, decreasing or constant) failure intensity. This paper deals with the reliability modelling of the failure process of repairable systems when the failure intensity shows a bathtub-type non-monotonic behaviour. A new stochastic process, an extended Poisson process, is introduced. Reliability indices and parameter estimation are presented. A comparison of this model with other repair models is made on the basis of a dataset.
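By contrast, a bathtub-type (non-monotonic) failure intensity can be illustrated with a standard non-homogeneous Poisson process simulated by Lewis-Shedler thinning; the intensity function below is an illustrative assumption, not the extended Poisson process proposed in the paper.

    # Sketch: NHPP with a bathtub-shaped intensity, simulated by thinning.
    # The intensity lambda(t) below is purely illustrative.
    import numpy as np

    rng = np.random.default_rng(1)

    def bathtub_intensity(t):
        return 2.0 * np.exp(-t) + 0.05 + 0.002 * t**2   # infant mortality + constant + wear-out

    def simulate_nhpp(t_end, lam, lam_max):
        """Lewis-Shedler thinning: propose events at rate lam_max, accept with prob lam(t)/lam_max."""
        times, t = [], 0.0
        while True:
            t += rng.exponential(1.0 / lam_max)
            if t > t_end:
                return np.array(times)
            if rng.uniform() < lam(t) / lam_max:
                times.append(t)

    failures = simulate_nhpp(t_end=30.0, lam=bathtub_intensity, lam_max=2.2)
    print(len(failures), "failures; early and late periods are denser than the middle")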
Abstract:
The Web's link structure (termed the Web Graph) is a richly connected set of Web pages. Current applications use this graph for indexing and information retrieval purposes. In contrast, this paper reverses the relationship between the Web Graph and the application by letting the structure of the Web Graph influence the behaviour of an application. It presents a novel Web crawling agent, AlienBot, whose output is orthogonally coupled to the enemy-generation strategy of a computer game. The Web Graph guides AlienBot, causing it to generate a stochastic process. The paper shows the effectiveness of such unorthodox coupling for both the playability of the game and the heuristics of the Web crawler. In addition, it presents results from the sample of Web pages collected by the crawling process. In particular, it shows: how AlienBot was able to identify the power law inherent in the link structure of the Web; that 61.74 per cent of Web pages use some form of scripting technology; that the size of the Web can be estimated at just over 5.2 billion pages; and that less than 7 per cent of Web pages fully comply with some variant of (X)HTML.
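For illustration, a power-law claim of this kind is typically checked with the standard continuous maximum-likelihood estimator for the exponent (Clauset/Newman style); the degree-like data below are synthetic placeholders, not AlienBot's crawl data.

    # Sketch: MLE for a power-law exponent alpha from degree-like data,
    # alpha_hat = 1 + n / sum(log(x_i / x_min)).  Synthetic data, illustrative only.
    import numpy as np

    rng = np.random.default_rng(2)
    x_min = 1.0
    alpha_true = 2.1
    u = rng.uniform(size=10_000)
    degrees = x_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))   # inverse-CDF sampling of a Pareto tail

    alpha_hat = 1.0 + len(degrees) / np.sum(np.log(degrees / x_min))
    print("estimated exponent:", round(alpha_hat, 3))            # close to 2.1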
Abstract:
The detection of long-range dependence in time series analysis is an important task to which this paper contributes by showing that, whilst the theoretical definition of a long-memory (or long-range dependent) process is based on the autocorrelation function, it is not possible for long memory to be identified using the sum of the sample autocorrelations, as usually defined. The reason for this is that the sample sum is a predetermined constant for any stationary time series, a result that is independent of the sample size. Diagnostic or estimation procedures, such as those in the frequency domain, that embed this sum are equally open to this criticism. We develop this result in the context of long memory, extending it to the implications for the spectral density function and the variance of partial sums of a stationary stochastic process. The results are further extended to higher-order sample autocorrelations and the bispectral density. The corresponding result is that the sum of the third-order sample (auto)bicorrelations at lags h, k ≥ 1 is also a predetermined constant, different from that in the second-order case, for any stationary time series of arbitrary length.
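The key constraint is easy to verify numerically: with the usual definition of the sample autocorrelation (deviations taken from the sample mean), the sum over all positive lags equals -1/2 for any series whatsoever, regardless of its dependence structure or length. A minimal check, with synthetic data for illustration only:

    # Sketch: the sum of sample autocorrelations r(1)+...+r(n-1) is always -1/2
    # when r(h) uses deviations from the sample mean, whatever the series.
    import numpy as np

    rng = np.random.default_rng(3)

    def sample_acf_sum(x):
        x = np.asarray(x, dtype=float)
        d = x - x.mean()
        c0 = np.sum(d * d)
        return sum(np.sum(d[h:] * d[:-h]) for h in range(1, len(x))) / c0

    print(sample_acf_sum(rng.normal(size=500)))              # -0.5 (white noise)
    print(sample_acf_sum(np.cumsum(rng.normal(size=500))))   # -0.5 (random walk, strong dependence)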
Abstract:
Despite the many models developed for phosphorus concentration prediction at differing spatial and temporal scales, there has been little effort to quantify uncertainty in their predictions. Quantification of model prediction uncertainty is desirable for informed decision-making in river-systems management. An uncertainty analysis of the process-based model, the integrated catchment model of phosphorus (INCA-P), within the generalised likelihood uncertainty estimation (GLUE) framework is presented. The framework is applied to the Lugg catchment (1,077 km²), a River Wye tributary on the England–Wales border. Daily discharge and monthly phosphorus (total reactive and total), for a limited number of reaches, are used to initially assess uncertainty and sensitivity of 44 model parameters identified as being most important for discharge and phosphorus predictions. This study demonstrates that parameter homogeneity assumptions (spatial heterogeneity is treated as land-use-type fractional areas) can achieve higher model fits than a previous expertly calibrated parameter set. The model is capable of reproducing the hydrology, but a threshold Nash-Sutcliffe coefficient of determination (E or R²) of 0.3 is not achieved when simulating observed total phosphorus (TP) data in the upland reaches or total reactive phosphorus (TRP) in any reach. Despite this, the model reproduces the general dynamics of TP and TRP in the point-source-dominated lower reaches. This paper discusses why this application of INCA-P fails to find any parameter sets that simultaneously describe all observed data acceptably. The discussion focuses on the uncertainty of readily available input data, and on whether such process-based models should be used when there is insufficient data to support their many parameters.
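For reference, a minimal sketch of the two quantitative ingredients mentioned here: the Nash-Sutcliffe efficiency (E) and a GLUE-style behavioural cut-off at E = 0.3. The observations, simulations and parameter draws below are synthetic placeholders, not INCA-P output.

    # Sketch: Nash-Sutcliffe efficiency and a GLUE-style behavioural threshold.
    # Synthetic observations/simulations; illustrative only.
    import numpy as np

    rng = np.random.default_rng(4)

    def nash_sutcliffe(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    obs = np.sin(np.linspace(0, 6, 200)) + 0.1 * rng.normal(size=200)   # stand-in observed series
    ensemble = [obs + s * rng.normal(size=200) for s in rng.uniform(0.05, 1.0, size=1000)]

    scores = np.array([nash_sutcliffe(obs, sim) for sim in ensemble])
    behavioural = scores >= 0.3                                          # GLUE behavioural cut-off
    print(f"{behavioural.sum()} of {len(scores)} runs retained as behavioural")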
Abstract:
An information processing paradigm in the brain is proposed, instantiated in an artificial neural network using biologically motivated temporal encoding. The network locates, within the external-world stimulus, the target memory, defined by a specific pattern of micro-features. The proposed network is robust and efficient. Akin in operation to the swarm intelligence paradigm stochastic diffusion search, it finds the best fit to the memory with linear time complexity. Information multiplexing enables neurons to process knowledge as 'tokens' rather than 'types'. The network illustrates the possible emergence of cognitive processing, such as memory retrieval based on partial matching, from low-level interactions. (C) 2007 Elsevier B.V. All rights reserved.
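As orientation for the search dynamic the abstract appeals to, a minimal sketch of standard stochastic diffusion search locating a target pattern via partial matching; here the "micro-features" are simply characters of a string, and the agent count, iteration count and string are illustrative assumptions.

    # Sketch: standard stochastic diffusion search (SDS) finding a pattern in a string.
    # Agents hold candidate positions; each tests one random micro-feature (character),
    # and inactive agents copy the hypothesis of a randomly chosen active agent.
    import random

    random.seed(5)
    text, target = "xxxyyystochasticzzz", "stochastic"
    n_agents, n_iters = 50, 60
    positions = [random.randrange(len(text) - len(target) + 1) for _ in range(n_agents)]
    active = [False] * n_agents

    for _ in range(n_iters):
        for i, p in enumerate(positions):                      # test phase: one random micro-feature
            j = random.randrange(len(target))
            active[i] = text[p + j] == target[j]
        for i in range(n_agents):                              # diffusion phase
            if not active[i]:
                k = random.randrange(n_agents)
                if active[k]:
                    positions[i] = positions[k]                # copy an active agent's hypothesis
                else:
                    positions[i] = random.randrange(len(text) - len(target) + 1)

    best = max(set(positions), key=positions.count)
    print("largest cluster at index", best, "->", text[best:best + len(target)])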
Abstract:
Major research on equity index dynamics has investigated only US indices (usually the S&P 500) and has provided contradictory results. This paper clarifies and extends that previous research. We find that European equity indices have quite different dynamics from the S&P 500. Each of the European indices considered may be satisfactorily modelled using either an affine model with price and volatility jumps or a GARCH volatility process without jumps. The S&P 500 dynamics are much more difficult to capture in a jump-diffusion framework.
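As a point of reference for the second alternative, a minimal GARCH(1,1) return simulation without jumps; the parameter values are illustrative placeholders, not estimates for any index in the paper.

    # Sketch: simulate returns from a GARCH(1,1) volatility process (no jumps).
    # sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}; parameters illustrative.
    import numpy as np

    rng = np.random.default_rng(6)
    omega, alpha, beta = 1e-6, 0.08, 0.90          # persistence alpha + beta < 1
    n = 2000
    r = np.empty(n)
    sigma2 = omega / (1.0 - alpha - beta)          # start at the unconditional variance
    for t in range(n):
        r[t] = np.sqrt(sigma2) * rng.standard_normal()
        sigma2 = omega + alpha * r[t] ** 2 + beta * sigma2

    print("annualised vol of simulated daily returns:", round(np.sqrt(252) * r.std(), 4))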
Abstract:
We consider the relation between so-called continuous localization models, i.e. non-linear stochastic Schrödinger evolutions, and the discrete GRW model of wave function collapse. The former can be understood as a scaling limit of the GRW process. The proof relies on a stochastic Trotter formula, which is of interest in its own right. Our Trotter formula also allows us to complement results on the existence theory of stochastic Schrödinger evolutions by Holevo and Mora/Rebolledo.
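For orientation, a minimal numerical sketch of a single discrete GRW localization step on a 1D grid: a collapse centre x is drawn with probability ||L_x psi||^2 and the wave function is multiplied by the corresponding Gaussian and renormalised. The grid, width parameter and initial state are illustrative assumptions, and the unitary evolution between hits is omitted; this is not the construction used in the paper's proof.

    # Sketch: one GRW localization ("hit") on a 1D grid.
    # psi -> L_x psi / ||L_x psi||, with the centre x drawn with probability ||L_x psi||^2.
    # Grid, width parameter and initial state are illustrative; unitary evolution between hits omitted.
    import numpy as np

    rng = np.random.default_rng(7)
    q = np.linspace(-10, 10, 2001)
    dq = q[1] - q[0]
    alpha = 1.0                                          # inverse localization width (illustrative)

    psi = np.exp(-q**2 / 8) * np.cos(2 * q)              # some spread-out state
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dq)        # normalise

    def grw_hit(psi):
        # weight of each candidate centre x: || L_x psi ||^2
        gauss = (alpha / np.pi) ** 0.25 * np.exp(-alpha * (q[:, None] - q[None, :]) ** 2 / 2)
        weights = np.sum((gauss ** 2) * np.abs(psi) ** 2 * dq, axis=1) * dq
        x_idx = rng.choice(len(q), p=weights / weights.sum())
        collapsed = gauss[x_idx] * psi
        return collapsed / np.sqrt(np.sum(np.abs(collapsed) ** 2) * dq), q[x_idx]

    psi_after, centre = grw_hit(psi)
    print("collapse centre:", round(centre, 3))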
Abstract:
We analyze the large-time behavior of a stochastic model for the lay-down of fibers on a moving conveyor belt in the production process of nonwovens. It is shown that, under weak conditions, this degenerate diffusion process has a unique invariant distribution and is even geometrically ergodic. This generalizes results from previous works [M. Grothaus and A. Klar, SIAM J. Math. Anal., 40 (2008), pp. 968–983; J. Dolbeault et al., arXiv:1201.2156], which concerned the case of a stationary conveyor belt and left the case of a moving conveyor belt open.
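A schematic Euler-Maruyama simulation in the spirit of such fiber lay-down models: the lay-down point moves with unit speed in the direction of an angle that is driven toward a reference point and perturbed by Brownian noise, while the belt speed is subtracted from one coordinate. The drift, potential and all parameter values below are illustrative assumptions, not the exact model analyzed in the paper.

    # Sketch: Euler-Maruyama simulation of a fiber lay-down type degenerate diffusion.
    # Position moves with unit speed in direction (cos a, sin a); the angle a has a drift
    # pulling the path toward the origin plus Brownian noise; the belt moves in the -x direction.
    # All modelling choices and parameters here are illustrative, not the paper's exact model.
    import numpy as np

    rng = np.random.default_rng(8)
    dt, n_steps = 1e-2, 50_000
    sigma, belt_speed = 1.0, 0.2

    x = np.zeros(2)            # lay-down point in belt coordinates
    a = 0.0                    # direction angle of the fiber
    path = np.empty((n_steps, 2))

    for k in range(n_steps):
        tau = np.array([np.cos(a), np.sin(a)])
        tau_perp = np.array([-np.sin(a), np.cos(a)])
        drift_a = -np.dot(tau_perp, x)             # turn toward the origin (potential V(x) = |x|^2 / 2)
        a += drift_a * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        x += tau * dt
        x[0] -= belt_speed * dt                    # moving conveyor belt
        path[k] = x

    print("time-averaged distance from the reference point:",
          round(np.linalg.norm(path, axis=1).mean(), 3))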