888 results for Discrete Time Branching Processes
Abstract:
We define a copula process that describes the dependencies between arbitrarily many random variables independently of their marginal distributions. As an example, we develop a stochastic volatility model, Gaussian Copula Process Volatility (GCPV), to predict the latent standard deviations of a sequence of random variables. To make predictions we use Bayesian inference with the Laplace approximation, and with Markov chain Monte Carlo as an alternative; we find the two methods comparable. We also find that our model can outperform GARCH on simulated and financial data. Unlike GARCH, GCPV can easily handle missing data, incorporate covariates other than time, and model a rich class of covariance structures.
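The generative structure sketched in this abstract can be illustrated in a few lines: a latent function with a Gaussian-process prior is passed through a warping function to produce positive standard deviations, which then scale Gaussian observations. The squared-exponential kernel, the `exp` warping, and all names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(x, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between all pairs of inputs."""
    d = x[:, None] - x[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def sample_gcpv(t, lengthscale=1.0, seed=0):
    """Draw from a toy GCPV-style generative model (illustrative only)."""
    rng = np.random.default_rng(seed)
    K = rbf_kernel(t, lengthscale) + 1e-6 * np.eye(len(t))  # jitter for stability
    f = rng.multivariate_normal(np.zeros(len(t)), K)   # latent GP draw
    sigma = np.exp(f)          # assumed warping: keeps volatilities positive
    y = sigma * rng.standard_normal(len(t))            # observations scaled by sigma
    return y, sigma

t = np.linspace(0.0, 10.0, 200)
y, sigma = sample_gcpv(t)
```

Inference over the latent `f` given `y` is where the Laplace approximation or MCMC mentioned in the abstract would enter; the sketch above only shows the forward model.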
Abstract:
We introduce a stochastic process with Wishart marginals: the generalised Wishart process (GWP). It is a collection of positive semi-definite random matrices indexed by an arbitrary dependent variable. We use it to model dynamic (e.g. time-varying) covariance matrices. Unlike existing models, it can capture a diverse class of covariance structures, it can easily handle missing data, the dependent variable can readily include covariates other than time, and it scales well with dimension; there is no need for free parameters, and optional parameters are easy to interpret. We describe how to construct the GWP, introduce general procedures for inference and prediction, and show that it outperforms its main competitor, multivariate GARCH, even on financial data that especially suits GARCH. We also show how to predict the mean of a multivariate process while accounting for dynamic correlations.
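The construction alluded to here can be sketched via the standard route to Wishart marginals: sum outer products of ν independent Gaussian-process draws, so that each Σ(x) is positive semi-definite by construction and varies smoothly with the index x. Kernel, ν, dimension, and all names below are illustrative assumptions, not the paper's construction in detail.

```python
import numpy as np

def rbf_kernel(x, ell=1.0):
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def sample_gwp(x, dim=2, nu=3, seed=0):
    """Draw a path of covariance matrices with Wishart(nu, I) marginals."""
    rng = np.random.default_rng(seed)
    K = rbf_kernel(x) + 1e-6 * np.eye(len(x))          # jitter for Cholesky
    L = np.linalg.cholesky(K)
    # nu * dim independent GP draws u_{i,d}(x), correlated along x
    U = (L @ rng.standard_normal((len(x), nu * dim))).T.reshape(nu, dim, len(x))
    # Sigma(x_t) = sum_i u_i(x_t) u_i(x_t)^T  -- PSD by construction
    return np.stack([sum(np.outer(U[i, :, t], U[i, :, t]) for i in range(nu))
                     for t in range(len(x))])

x = np.linspace(0.0, 5.0, 50)
Sigmas = sample_gwp(x)
```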
Abstract:
Given a spectral density matrix Φ(ω) or, equivalently, a real autocovariance sequence, the author seeks to determine a finite-dimensional linear time-invariant system which, when driven by white noise, will produce an output whose spectral density approximates Φ(ω), together with an approximate spectral factor of Φ(ω). The author employs the Anderson-Faurre theory in his analysis.
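For orientation, this is the classical spectral factorisation setting: if the identified system has a stable transfer function $W(s)$ and is driven by unit-intensity white noise, its output spectral density is

```latex
\Phi_W(\omega) \;=\; W(j\omega)\, W(j\omega)^{*} \;\approx\; \Phi(\omega),
```

so that $W$ plays the role of the approximate spectral factor of $\Phi(\omega)$.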
Abstract:
This paper considers the effect of the rotor tip on the casing heat load of a transonic axial flow turbine. The aim of the research is to understand the dominant causes of casing heat transfer. Experimental measurements were conducted at engine-representative Mach number, Reynolds number, and stage inlet to casing wall temperature ratio. Time-resolved heat-transfer coefficient and gas recovery temperature on the casing were measured using an array of heat-transfer gauges. Time-resolved static pressure on the casing wall was measured using Kulite pressure transducers. Time-resolved numerical simulations were undertaken to aid understanding of the mechanism responsible for casing heat load. The results show that between 35% and 60% axial chord the rotor tip-leakage flow is responsible for more than 50% of casing heat transfer. The effects of gas recovery temperature and heat-transfer coefficient were investigated separately, and it is shown that an increased stagnation temperature in the rotor tip gap dominates casing heat transfer. In the tip gap the stagnation temperature is shown to rise above that found at stage inlet (combustor exit) by as much as 35% of the stage total temperature drop. The rise in stagnation temperature is caused by an isentropic work input to the tip-leakage fluid by the rotor. The magnitude of this mechanism is investigated by computationally tracking fluid path-lines through the rotor tip gap to understand the unsteady work processes that occur. Copyright © 2005 by ASME.
Abstract:
This paper discusses the application of Discrete Event Simulation (DES) in modelling the complex relationship between patient types, case-mix and operating theatre allocation in a large National Health Service (NHS) Trust in London. The simulation model that was constructed described the main features of nine theatres, focusing on operational processes and patient throughput times. The model was used to test three scenarios of case-mix and to demonstrate the potential of simulation modelling as a cost-effective method for understanding the issues of healthcare operations management and the role of simulation techniques in problem solving. The results indicated that removing all day cases would reduce patient throughput by 23.3% and the utilization of the orthopaedic theatre in particular by 6.5%. This represents a case example of how DES can be used by healthcare managers to inform decision making. © 2008 IEEE.
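To give a flavour of the DES approach, a minimal single-theatre queue can be simulated in a few lines. This toy sketch, with entirely invented parameters, is not the authors' nine-theatre NHS model; it only illustrates the event-driven bookkeeping of arrivals, waiting, and throughput time.

```python
import random

def simulate_theatre(n_patients=100, mean_interarrival=30.0,
                     mean_surgery=25.0, seed=1):
    """Mean patient throughput time (minutes) for a FIFO single theatre."""
    rng = random.Random(seed)
    t, free_at, times_in_system = 0.0, 0.0, []
    for _ in range(n_patients):
        t += rng.expovariate(1.0 / mean_interarrival)  # next arrival event
        start = max(t, free_at)                        # wait if theatre busy
        duration = rng.expovariate(1.0 / mean_surgery)
        free_at = start + duration                     # theatre released
        times_in_system.append(free_at - t)            # wait + surgery
    return sum(times_in_system) / len(times_in_system)

avg_time = simulate_theatre()
```

A full case-mix study like the one described would add patient classes, multiple theatres, and scheduling rules on top of this skeleton.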
Abstract:
In this paper we present Poisson sum series representations for α-stable (αS) random variables and α-stable processes, in particular concentrating on continuous-time autoregressive (CAR) models driven by α-stable Lévy processes. Our representations aim to provide a conditionally Gaussian framework, which will allow parameter estimation using Rao-Blackwellised versions of state-of-the-art Bayesian computational methods such as particle filters and Markov chain Monte Carlo (MCMC). To overcome the issues due to truncation of the series, novel residual approximations are developed. Simulations demonstrate the potential of these Poisson sum representations for inference in otherwise intractable α-stable models. © 2011 IEEE.
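A truncated series of the kind described can be sketched as follows: with $\Gamma_i$ the arrival times of a unit-rate Poisson process and Gaussian weights $W_i$, the sum is Gaussian conditional on the $\Gamma_i$, which is exactly the conditionally Gaussian structure the paper exploits. Normalising constants and the residual corrections developed in the paper are omitted; everything below is an illustrative assumption.

```python
import numpy as np

def truncated_stable_series(alpha, n_terms=1000, seed=0):
    """Truncated Poisson (LePage-type) series for a symmetric alpha-stable draw."""
    rng = np.random.default_rng(seed)
    gammas = np.cumsum(rng.exponential(size=n_terms))  # Poisson arrival times
    w = rng.standard_normal(n_terms)                   # conditionally Gaussian weights
    return np.sum(gammas ** (-1.0 / alpha) * w)        # heavy-tailed sum

x = truncated_stable_series(1.5)
```

Conditioning on the `gammas` makes each draw a Gaussian with known variance, which is what enables Rao-Blackwellised particle filters and MCMC.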
Abstract:
Simulated annealing is a popular method for approaching the solution of a global optimization problem. Existing results on its performance apply to discrete combinatorial optimization where the optimization variables can assume only a finite set of possible values. We introduce a new general formulation of simulated annealing which allows one to guarantee finite-time performance in the optimization of functions of continuous variables. The results hold universally for any optimization problem on a bounded domain and establish a connection between simulated annealing and up-to-date theory of convergence of Markov chain Monte Carlo methods on continuous domains. This work is inspired by the concept of finite-time learning with known accuracy and confidence developed in statistical learning theory.
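For concreteness, a generic continuous-domain simulated annealing loop: Gaussian proposals on a bounded interval, Metropolis acceptance, and a logarithmic cooling schedule. This is a textbook sketch under invented settings, not the authors' formulation or their finite-time guarantee.

```python
import math, random

def anneal(f, lo=-5.0, hi=5.0, steps=5000, seed=0):
    """Minimise f on [lo, hi] by simulated annealing; returns best point found."""
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    best_x, best_f = x, f(x)
    for k in range(1, steps + 1):
        temp = 1.0 / math.log(k + 1)                    # logarithmic cooling
        cand = min(hi, max(lo, x + rng.gauss(0.0, 0.5)))  # clipped Gaussian proposal
        delta = f(cand) - f(x)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = cand                                    # Metropolis acceptance
            if f(x) < best_f:
                best_x, best_f = x, f(x)
    return best_x, best_f

x_star, f_star = anneal(lambda x: (x - 2.0) ** 2)
```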
Abstract:
A novel test method for the characterisation of flexible forming processes is proposed and applied to four flexible forming processes: Incremental Sheet Forming (ISF), conventional spinning, the English wheel and power hammer. The proposed method is developed in analogy with time-domain control engineering, where a system is characterised by its impulse response. The spatial impulse response is used to characterise the change in workpiece deformation created by a process and, combined with a strain spectrogram, offers a novel way to characterise a process and the physical effect it has on the workpiece. Physical and numerical trials to study the effects of process and material parameters on spatial impulse response lead to three main conclusions. Incremental sheet forming is particularly sensitive to process parameters. The English wheel and power hammer are strongly similar and largely insensitive to both process and material parameters. Spinning develops in two stages and is sensitive to most process parameters, but insensitive to prior deformation. Finally, the proposed method could be applied to modelling, classification of existing and novel processes, product-process matching and closed-loop control of flexible forming processes. © 2012 Elsevier B.V.
Abstract:
A case study of an aircraft engine manufacturer is used to analyze the effects of management levers on the lead time and design errors generated in an iteration-intensive concurrent engineering process. The levers considered are amount of design-space exploration iteration, degree of process concurrency, and timing of design reviews. Simulation is used to show how the ideal combination of these levers can vary with changes in design problem complexity, which can increase, for instance, when novel technology is incorporated in a design. Results confirm that it is important to consider multiple iteration-influencing factors and their interdependencies to understand concurrent processes, because the factors can interact with confounding effects. The article also demonstrates a new approach to derive a system dynamics model from a process task network. The new approach could be applied to analyze other concurrent engineering scenarios. © The Author(s) 2012.
Abstract:
Decisions about noisy stimuli require evidence integration over time. Traditionally, evidence integration and decision making are described as a one-stage process: a decision is made when evidence for the presence of a stimulus crosses a threshold. Here, we show that one-stage models cannot explain psychophysical experiments on feature fusion, where two visual stimuli are presented in rapid succession. Paradoxically, the second stimulus biases decisions more strongly than the first one, contrary to predictions of one-stage models and intuition. We present a two-stage model where sensory information is integrated and buffered before it is fed into a drift diffusion process. The model is tested in a series of psychophysical experiments and explains both accuracy and reaction time distributions. © 2012 Rüter et al.
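The decision stage of such models is a drift-diffusion process: evidence accumulates with drift μ and noise until it crosses a threshold ±a. A minimal simulation is sketched below. In the two-stage account, sensory input would first be integrated and buffered before entering this stage; all parameters here are illustrative, not fitted values from the paper.

```python
import numpy as np

def drift_diffusion(mu=0.2, a=1.0, sigma=1.0, dt=0.001, max_t=10.0, seed=0):
    """Simulate one drift-diffusion trial; returns (choice, reaction time)."""
    rng = np.random.default_rng(seed)
    noise_scale = sigma * np.sqrt(dt)
    x, t = 0.0, 0.0
    while abs(x) < a and t < max_t:
        x += mu * dt + noise_scale * rng.standard_normal()  # Euler step
        t += dt
    return (1 if x >= a else 0), t   # 1 = upper boundary, 0 = lower/timeout

choice, rt = drift_diffusion()
```

Running many trials yields the accuracy and reaction-time distributions against which such models are tested.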
Abstract:
This work shows how a dialogue model can be represented as a Partially Observable Markov Decision Process (POMDP) with observations composed of a discrete and continuous component. The continuous component enables the model to directly incorporate a confidence score for automated planning. Using a testbed simulated dialogue management problem, we show how recent optimization techniques are able to find a policy for this continuous POMDP which outperforms a traditional MDP approach. Further, we present a method for automatically improving handcrafted dialogue managers by incorporating POMDP belief state monitoring, including confidence score information. Experiments on the testbed system show significant improvements for several example handcrafted dialogue managers across a range of operating conditions.
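The belief-state monitoring mentioned here is the standard POMDP filtering step, $b'(s') \propto O(o \mid s') \sum_s T(s' \mid s, a)\, b(s)$. A generic sketch with toy matrices (not the paper's dialogue model, which adds a continuous confidence-score component to the observation):

```python
import numpy as np

def belief_update(b, T, O_col):
    """One POMDP belief update: transition matrix T, observation likelihoods O_col."""
    b_new = O_col * (T.T @ b)   # predict through T, then weight by obs likelihood
    return b_new / b_new.sum()  # renormalise to a distribution

b = np.array([0.5, 0.5])               # prior belief over 2 dialogue states
T = np.array([[0.9, 0.1],              # T[s, s'] = P(s' | s, a) for the taken action
              [0.2, 0.8]])
O_col = np.array([0.7, 0.3])           # P(observation | s') for the observation seen
b2 = belief_update(b, T, O_col)
```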
Abstract:
Analyses of crack growth under cyclic loading conditions are discussed where plastic flow arises from the motion of large numbers of discrete dislocations and the fracture properties are embedded in a cohesive surface constitutive relation. The formulation is the same as used to analyse crack growth under monotonic loading conditions, differing only in the remote loading being a cyclic function of time. Fatigue, i.e. crack growth in cyclic loading at a driving force for which the crack would have arrested under monotonic loading, emerges in the simulations as a consequence of the evolution of internal stresses associated with the irreversibility of the dislocation motion. A fatigue threshold, Paris law behaviour, striations, the accelerated growth of short cracks and the scaling with material properties are outcomes of the calculations. Results for single crystals and polycrystals will be discussed.
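The Paris-law behaviour that emerges from these simulations is conventionally written $\mathrm{d}a/\mathrm{d}N = C\,(\Delta K)^m$ with $\Delta K = Y\,\Delta\sigma\sqrt{\pi a}$. A small numerical integration of this law, with entirely illustrative constants (not values from the paper), shows how cycle counts accumulate as a crack grows:

```python
import math

def cycles_to_grow(a0, af, C=1e-11, m=3.0, dsigma=100.0, Y=1.0, da=1e-5):
    """Cycles to grow a crack from a0 to af under the Paris law (toy constants)."""
    n, a = 0.0, a0
    while a < af:
        dK = Y * dsigma * math.sqrt(math.pi * a)  # stress intensity factor range
        n += da / (C * dK ** m)                   # cycles spent on increment da
        a += da
    return n

N = cycles_to_grow(a0=1e-3, af=1e-2)
```

Because $\Delta K$ grows with $a$, most of the life is spent while the crack is short, consistent with the accelerated growth of short cracks noted in the abstract.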
Abstract:
Natural sounds are structured on many time-scales. A typical segment of speech, for example, contains features that span four orders of magnitude: Sentences ($\sim 1$ s); phonemes ($\sim 10^{-1}$ s); glottal pulses ($\sim 10^{-2}$ s); and formants ($\sim 10^{-3}$ s). The auditory system uses information from each of these time-scales to solve complicated tasks such as auditory scene analysis [1]. One route toward understanding how auditory processing accomplishes this analysis is to build neuroscience-inspired algorithms which solve similar tasks and to compare the properties of these algorithms with properties of auditory processing. There is however a discord: Current machine-audition algorithms largely concentrate on the shorter time-scale structures in sounds, and the longer structures are ignored. The reason for this is two-fold. Firstly, it is a difficult technical problem to construct an algorithm that utilises both sorts of information. Secondly, it is computationally demanding to simultaneously process data both at high resolution (to extract short temporal information) and for long duration (to extract long temporal information). The contribution of this work is to develop a new statistical model for natural sounds that captures structure across a wide range of time-scales, and to provide efficient learning and inference algorithms. We demonstrate the success of this approach on a missing data task.
Abstract:
The fundamental aim of clustering algorithms is to partition data points. We consider tasks where the discovered partition is allowed to vary with some covariate such as space or time. One approach would be to use fragmentation-coagulation processes, but these, being Markov processes, are restricted to linear or tree-structured covariate spaces. We define a partition-valued process on an arbitrary covariate space using Gaussian processes. We use the process to construct a multitask clustering model which partitions data points in a similar way across multiple data sources, and a time series model of network data which allows cluster assignments to vary over time. We describe sampling algorithms for inference and apply our method to defining cancer subtypes based on different types of cellular characteristics, finding regulatory modules from gene expression data from multiple human populations, and discovering time-varying community structure in a social network.
Abstract:
Animals repeat rewarded behaviors, but the physiological basis of reward-based learning has only been partially elucidated. On one hand, experimental evidence shows that the neuromodulator dopamine carries information about rewards and affects synaptic plasticity. On the other hand, the theory of reinforcement learning provides a framework for reward-based learning. Recent models of reward-modulated spike-timing-dependent plasticity have taken first steps toward bridging the gap between the two approaches, but they face two problems. First, reinforcement learning is typically formulated in a discrete framework, ill-adapted to the description of natural situations. Second, biologically plausible models of reward-modulated spike-timing-dependent plasticity require precise calculation of the reward prediction error, yet it remains to be shown how this can be computed by neurons. Here we propose a solution to these problems by extending the continuous temporal difference (TD) learning of Doya (2000) to the case of spiking neurons in an actor-critic network operating in continuous time, and with continuous state and action representations. In our model, the critic learns to predict expected future rewards in real time. Its activity, together with actual rewards, conditions the delivery of a neuromodulatory TD signal to itself and to the actor, which is responsible for action choice. In simulations, we show that such an architecture can solve a Morris water-maze-like navigation task, in a number of trials consistent with reported animal performance. We also use our model to solve the acrobot and the cartpole problems, two complex motor control tasks. Our model provides a plausible way of computing reward prediction error in the brain. Moreover, the analytically derived learning rule is consistent with experimental evidence for dopamine-modulated spike-timing-dependent plasticity.
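For orientation, a discrete-time tabular analogue of the TD signal that drives the critic: $\delta = r + \gamma V(s') - V(s)$. Doya's continuous formulation, used in the paper, recovers this in the small-time-step limit. The code below is a generic illustration of the reward prediction error, not the spiking actor-critic network itself; all names and constants are invented.

```python
import numpy as np

def td_learning(transitions, n_states, alpha=0.1, gamma=0.95):
    """Tabular TD(0) critic: learn state values from (s, reward, s') tuples."""
    v = np.zeros(n_states)
    for s, r, s_next in transitions:
        delta = r + gamma * v[s_next] - v[s]   # reward prediction error
        v[s] += alpha * delta                  # critic update
    return v

# Toy 2-state loop: state 1 yields reward 1 on each visit, state 0 yields none.
transitions = [(0, 0.0, 1), (1, 1.0, 0)] * 200
v = td_learning(transitions, n_states=2)
```

In the paper's architecture the same error signal, delivered as a neuromodulatory TD signal, also gates plasticity in the actor that selects actions.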