909 results for stochastic adding machines
Abstract:
This thesis is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real-world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variant of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here two new extended frameworks are derived and presented that are based on basis function expansions and local polynomial approximations of a recently proposed variational Bayesian algorithm. It is shown that the new extensions converge to the original variational algorithm and can be used for state estimation (smoothing). However, the main focus is on estimating the (hyper-)parameters of these systems (i.e. drift parameters and diffusion coefficients). The new methods are numerically validated on a range of systems which vary in dimensionality and non-linearity: the Ornstein-Uhlenbeck process, for which the exact likelihood can be computed analytically; the univariate, highly non-linear stochastic double well; and the multivariate chaotic stochastic Lorenz '63 (3-dimensional) model. The algorithms are also applied to the 40-dimensional stochastic Lorenz '96 system. In this investigation these new approaches are compared with a variety of other well-known methods, such as the ensemble Kalman filter/smoother, a hybrid Monte Carlo sampler, the dual unscented Kalman filter (for jointly estimating the system states and model parameters) and full weak-constraint 4D-Var. An empirical analysis of their asymptotic behaviour as the observation density or the length of the time window increases is also provided.
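As an illustration of the kind of system treated above, the following minimal sketch simulates an Ornstein-Uhlenbeck process with the Euler-Maruyama scheme; the parameter names theta (drift) and sigma (diffusion) are illustrative, not the thesis's notation.

```python
import numpy as np

def euler_maruyama_ou(theta, sigma, x0, dt, n_steps, rng=None):
    """Simulate dx = -theta * x dt + sigma dW with the Euler-Maruyama
    scheme: deterministic drift step plus a Gaussian noise increment."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        drift = -theta * x[k]                       # deterministic part
        noise = sigma * np.sqrt(dt) * rng.normal()  # stochastic increment
        x[k + 1] = x[k] + drift * dt + noise
    return x

path = euler_maruyama_ou(theta=2.0, sigma=0.5, x0=1.0, dt=0.01, n_steps=1000)
```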
Abstract:
This work is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real-world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variation of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here a new extended framework is derived that is based on a local polynomial approximation of a recently proposed variational Bayesian algorithm. The paper begins by showing that the new extension of this variational algorithm can be used for state estimation (smoothing) and converges to the original algorithm. However, the main focus is on estimating the (hyper-)parameters of these systems (i.e. drift parameters and diffusion coefficients). The new approach is validated on a range of systems which vary in dimensionality and non-linearity: the Ornstein-Uhlenbeck process, the exact likelihood of which can be computed analytically; the univariate, highly non-linear stochastic double well; and the multivariate chaotic stochastic Lorenz '63 (3D) model. As a special case the algorithm is also applied to the 40-dimensional stochastic Lorenz '96 system. In our investigation we compare this new approach with a variety of other well-known methods, such as the hybrid Monte Carlo sampler, the dual unscented Kalman filter and the full weak-constraint 4D-Var algorithm, and analyse empirically their asymptotic behaviour as the observation density or the length of the time window increases. In particular we show that we are able to estimate parameters in both the drift (deterministic) and the diffusion (stochastic) parts of the model evolution equations using our new methods.
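For the Ornstein-Uhlenbeck benchmark, the exact likelihood mentioned above follows from the Gaussian transition density of the process. A minimal sketch (parameter names are assumptions, not the paper's notation):

```python
import numpy as np
from scipy.stats import norm

def ou_exact_loglik(x, dt, theta, sigma):
    """Exact log-likelihood of an observed OU path x with spacing dt:
    X_{t+dt} | X_t = x_t is Gaussian with mean x_t * exp(-theta*dt) and
    variance sigma^2 * (1 - exp(-2*theta*dt)) / (2*theta)."""
    x = np.asarray(x)
    mean = x[:-1] * np.exp(-theta * dt)
    var = sigma**2 * (1.0 - np.exp(-2.0 * theta * dt)) / (2.0 * theta)
    return norm.logpdf(x[1:], loc=mean, scale=np.sqrt(var)).sum()
```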
Abstract:
In Model-Driven Engineering (MDE), the developer creates a model using a language such as the Unified Modeling Language (UML) or UML for Real-Time (UML-RT) and uses tools such as Papyrus or Papyrus-RT that generate code from that model. Tracing allows developers to gain insight into their application as it runs, such as which events occur and their timing. We add monitoring capabilities to models created in UML-RT using Papyrus-RT by means of the Linux Trace Toolkit: next generation (LTTng). The implementation requires changing the code generator so that tracing statements for the events the user wants to monitor are added to the generated code. We also change the makefile to automate the build process, and we create an Extensible Markup Language (XML) file that allows developers to view their traces visually in Trace Compass, an Eclipse-based trace viewing tool. Finally, we validate our results using three models that we create and trace.
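A minimal sketch of the kind of code-generator change described above: a post-processing pass that injects a tracing statement at the start of each generated event handler. The 'void handle_<event>(...)' naming convention and the 'model_events' tracepoint provider are hypothetical stand-ins, not Papyrus-RT's actual output.

```python
import re

TRACE_CALL = '    tracepoint(model_events, {event});  /* injected trace */\n'

def inject_tracepoints(generated_source: str, events: list[str]) -> str:
    """Insert an LTTng-UST tracepoint call at the start of each generated
    event handler matching the (hypothetical) handler naming convention."""
    for event in events:
        pattern = rf'(void\s+handle_{event}\s*\([^)]*\)\s*\{{\n)'
        generated_source = re.sub(
            pattern, r'\1' + TRACE_CALL.format(event=event), generated_source)
    return generated_source
```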
Abstract:
Many geological formations consist of crystalline rocks that have very low matrix permeability but allow flow through an interconnected network of fractures. Understanding the flow of groundwater through such rocks is important in considering disposal of radioactive waste in underground repositories. A specific area of interest is the conditioning of fracture transmissivities on measured values of pressure in these formations. This is the process whereby the values of fracture transmissivities in a model are adjusted to obtain a good fit of the calculated pressures to measured pressure values. While there are existing methods to condition transmissivity fields on transmissivity, pressure and flow measurements for a continuous porous medium, there is little literature on conditioning fracture networks. Conditioning fracture transmissivities on pressure or flow values is a complex problem because the measurements are not linearly related to the fracture transmissivities and they also depend on all the fracture transmissivities in the network. We present a new method for conditioning fracture transmissivities on measured pressure values based on the calculation of certain basis vectors; each basis vector represents the change to the log transmissivity of the fractures in the network that results in a unit increase in the pressure at one measurement point whilst keeping the pressure at the remaining measurement points constant. The fracture transmissivities are updated by adding a linear combination of the basis vectors, where the coefficients are obtained by minimizing an error function. A mathematical summary of the method is given. The algorithm is implemented in the existing finite element code ConnectFlow, developed and marketed by Serco Technical Services, which models groundwater flow in a fracture network. Results of the conditioning are shown for a number of simple test problems as well as for a realistic large-scale test case.
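A sketch of one update of the conditioning scheme just described, under the simplifying assumption that the basis vectors are available as columns of a matrix B (all names are illustrative; the real implementation lives inside ConnectFlow):

```python
import numpy as np

def condition_step(log_T, B, p_calc, p_meas, relax=1.0):
    """One conditioning step. Column j of B raises the pressure at
    measurement point j by one unit while holding the other measurement
    points fixed, so to first order the coefficients that cancel the misfit
    are simply the pressure residuals. Because the pressure-transmissivity
    relation is nonlinear, the step is under-relaxed and iterated."""
    coeffs = relax * (p_meas - p_calc)  # minimises the linearised misfit
    return log_T + B @ coeffs           # linear combination of basis vectors
```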
Abstract:
Historically, domestic tasks such as preparing food and washing and drying clothes and dishes were done by hand. In a modern home many of these chores are taken care of by machines such as washing machines, dishwashers and tumble dryers. When the first such machines came on the market, customers were happy that they worked at all! Today, the costs of electricity and customers' environmental awareness are high, so features such as low electricity, water and detergent use strongly influence which household machine the customer will buy. One way to achieve lower electricity usage for the tumble dryer and the dishwasher is to add a heat pump system. The function of a heat pump system is to extract heat from a lower-temperature source (heat source) and reject it to a higher-temperature sink (heat sink). Heat pump systems have long been used in refrigerators and freezers, and that industry has driven the development of small, high-quality, low-price heat pump components. The low price of good-quality heat pump components, along with an increased willingness to pay extra for lower electricity usage and environmental impact, makes it possible to introduce heat pump systems in other household products. However, there is a high risk of failure with new features. A number of household manufacturers no longer exist because they introduced poorly implemented new features, which resulted in low quality and poor product performance. A manufacturer must predict whether the future value of a feature is high enough for the customer chain to pay for it. The challenge for the manufacturer is to develop and produce a high-performance heat pump feature in a household product with high quality, predict the future willingness to pay for it, and launch it at the right moment in order to succeed. Tumble dryers with heat pump systems have been on the market since 2000. Paper I reports on the development of a transient simulation model of a commercial heat pump tumble dryer. The measured and simulated results were compared and showed good agreement. The influence of the size of the compressor and the condenser was investigated using the validated simulation model. The results from the simulation model show that increasing the cylinder volume of the compressor by 50% decreases the drying time by 14% without using more electricity. Paper II is a concept study of adding a heat pump system to a dishwasher in order to decrease the total electricity usage. The dishwasher, dishware and water are heated by the condenser, and the evaporator absorbs heat from a water tank. The majority of the heat transfer to the evaporator occurs when ice is generated in the water tank. An experimental setup and a transient simulation model of a heat pump dishwasher were developed. The simulation results show a 24% reduction in electricity use compared to a conventional dishwasher heated with an electric element. The simulation model was based on an experimental setup that was not optimised. During the study it became apparent that it is possible to decrease electricity usage even further with the next experimental setup.
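The energy argument behind adding a heat pump can be sketched in a few lines: a resistive element converts electricity to heat 1:1, while a heat pump delivers COP units of heat per unit of electricity. The demand and COP figures below are assumed for illustration and are not taken from the papers.

```python
def heater_electricity_kwh(heat_demand_kwh, cop=None):
    """Electricity needed to deliver a given heat demand: resistive heating
    is 1:1, a heat pump needs only demand / COP."""
    return heat_demand_kwh if cop is None else heat_demand_kwh / cop

demand = 1.5  # kWh of heat per cycle (assumed figure)
print(heater_electricity_kwh(demand))           # electric element: 1.5 kWh
print(heater_electricity_kwh(demand, cop=3.0))  # heat pump: 0.5 kWh
```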
Abstract:
The aim of this study was to evaluate the viability of using spent laying hens' meat in the manufacture of mortadella-type sausages with healthy appeal, by using vegetable oil instead of animal fat. A total of 120 Hy-line® layer hens were distributed in a completely randomized design into two treatments of six replicates with ten birds each. The treatments were birds from the light Hy-line® W36 and semi-heavy Hy-line® Brown lines. Cold carcass, wing, breast and leg fillet yields were determined. Dry matter, protein and lipid contents were determined in breast and leg fillets. The breast and leg fillets of three replicates per treatment were used to manufacture mortadella. After processing, the sausages were evaluated for proximate composition, objective color, microbiological parameters, fatty acid profile and sensory acceptance. The meat of light and semi-heavy spent hens presented good yield and composition, allowing it to be used as raw material for the manufacture of processed products. The mortadellas were safe from a microbiological point of view, and those made with semi-heavy hen fillets were redder and better accepted by consumers. All sensory attributes scored above 5 (neither liked nor disliked). Both products presented high polyunsaturated fatty acid contents and a good polyunsaturated-to-saturated fatty acid ratio. The excellent potential of meat from spent layer hens of both varieties for the manufacture of healthier mortadella-type sausages was demonstrated.
Abstract:
Our purpose is to analyze the effect of explicit diffusion processes in a predator-prey stochastic lattice model. More precisely, we investigate the possible effects of diffusion upon the thresholds of coexistence of species, i.e., the possible changes in the transition between the active state and the absorbing state devoid of predators. To accomplish this task we have performed time-dependent simulations and dynamic mean-field approximations. Our results indicate that the diffusive process can enhance species coexistence.
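A minimal sketch of a predator-prey lattice update with explicit diffusion, in the spirit of the model above; the specific rules and rates are illustrative assumptions, not the paper's exact dynamics.

```python
import numpy as np

EMPTY, PREY, PRED = 0, 1, 2

def update(lattice, p_diff, p_death, rng):
    """One random-sequential update: with probability p_diff the chosen site
    swaps contents with a random neighbour (explicit diffusion); otherwise
    simple birth/predation/death rules apply on a periodic square lattice."""
    L = lattice.shape[0]
    i, j = rng.integers(L, size=2)
    di, dj = ((-1, 0), (1, 0), (0, -1), (0, 1))[rng.integers(4)]
    ni, nj = (i + di) % L, (j + dj) % L
    if rng.random() < p_diff:                             # diffusion: swap
        lattice[i, j], lattice[ni, nj] = lattice[ni, nj], lattice[i, j]
    elif lattice[i, j] == PREY and lattice[ni, nj] == EMPTY:
        lattice[ni, nj] = PREY                            # prey birth
    elif lattice[i, j] == PRED and lattice[ni, nj] == PREY:
        lattice[ni, nj] = PRED                            # predation
    elif lattice[i, j] == PRED and rng.random() < p_death:
        lattice[i, j] = EMPTY                             # predator death
```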
Abstract:
Consider N sites randomly and uniformly distributed in a d-dimensional hypercube. A walker explores this disordered medium by going to the nearest site that has not been visited in the last $\mu$ (memory) steps. The walker trajectory is composed of a transient part and a periodic part (cycle). For one-dimensional systems, the walker may or may not explore all available space, giving rise to a crossover between localized and extended regimes at the critical memory $\mu_1 = \log_2 N$. The deterministic rule can be softened to consider more realistic situations with the inclusion of a stochastic parameter T (temperature). In this case, the walker movement is driven by a probability density function parameterized by T and a cost function. The cost function increases with the distance between two sites and favors hops to closer sites. As the temperature increases, the walker can escape from cycles that are reminiscent of the deterministic nature and extend its exploration. Here, we report an analytical model and numerical studies of the influence of the temperature and the critical memory on the exploration of one-dimensional disordered systems.
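A sketch of the softened (finite-temperature) walk described above, in one dimension; the exponential form exp(-d/T) is one illustrative choice of cost-based hop probability, not necessarily the authors'.

```python
import numpy as np

def stochastic_tourist_walk(sites, mu, T, n_steps, rng=None):
    """Walk over 1-D site positions (numpy array): at T=0 go to the nearest
    site not visited in the last mu steps; at T>0 hop to site j with
    probability proportional to exp(-d_j / T). Assumes 1 <= mu < len(sites)."""
    rng = np.random.default_rng() if rng is None else rng
    current, memory, path = 0, [0], [0]
    for _ in range(n_steps):
        allowed = [j for j in range(len(sites)) if j not in memory[-mu:]]
        d = np.abs(sites[allowed] - sites[current])
        if T == 0:
            nxt = allowed[int(np.argmin(d))]       # deterministic rule
        else:
            w = np.exp(-d / T)                     # cost favours closer sites
            nxt = int(rng.choice(allowed, p=w / w.sum()))
        memory.append(nxt); path.append(nxt); current = nxt
    return path
```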
Abstract:
Objective: To examine the ability of the criteria proposed by the WHO to identify pneumonia among cases presenting with wheezing, and the extent to which adding fever to the criteria alters their performance. Design: Prospective classification of 390 children aged 2-59 months with lower respiratory tract disease into five diagnostic categories, including pneumonia. The WHO criteria for the identification of pneumonia, and a set of such criteria modified by adding fever, were compared with radiographically diagnosed pneumonia as the gold standard. Results: The sensitivity of the WHO criteria was 94% for children aged <24 months and 62% for those aged >=24 months. The corresponding specificities were 20% and 16%. Adding fever to the WHO criteria improved specificity substantially (to 44% and 50%, respectively). The specificity of the WHO criteria was poor for children with wheezing (12%); adding fever improved this substantially (to 42%). The addition of fever apparently reduced sensitivity only marginally (to 92% and 57%, respectively, in the two age groups). Conclusion: The authors' results reaffirm that the current WHO criteria can detect pneumonia with high sensitivity, particularly among younger children. They present evidence that the ability of these criteria to distinguish between children with pneumonia and those with wheezing diseases might be greatly enhanced by the addition of fever.
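The sensitivity/specificity arithmetic underlying these comparisons is simple; the sketch below uses illustrative 2x2 counts chosen to reproduce the reported percentages (denominators of 100 assumed), not the study's raw data.

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Children aged <24 months, illustrative counts:
print(sens_spec(tp=94, fn=6, tn=20, fp=80))  # WHO criteria: 94%, 20%
print(sens_spec(tp=92, fn=8, tn=44, fp=56))  # WHO criteria + fever: 92%, 44%
```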
Abstract:
We present four estimators of the shared information (or interdependency) in ground states, given that the coefficients appearing in the wave function are all real non-negative numbers and can therefore be interpreted as probabilities of configurations. Such ground states of Hermitian and non-Hermitian Hamiltonians can be given, for example, by superpositions of valence bond states, which can describe equilibrium but also stationary states of stochastic models. We consider in detail the latter case, the system being a classical rather than a quantum one. Using analytical and numerical methods we compare the values of the estimators in the directed polymer and the raise and peel models, which have massive, conformally invariant and non-conformally-invariant massless phases. We show that, as in the quantum problem, the estimators verify the area law with logarithmic corrections when phase transitions take place.
Abstract:
With each directed acyclic graph (this includes some D-dimensional lattices) one can associate certain Abelian algebras that we call directed Abelian algebras (DAAs). On each site of the graph one attaches a generator of the algebra. These algebras depend on several parameters and are semisimple. Using any DAA, one can define a family of Hamiltonians which give the continuous-time evolution of a stochastic process. The calculation of the spectra and ground-state wave functions (stationary-state probability distributions) is an easy algebraic exercise. If one considers D-dimensional lattices and chooses Hamiltonians linear in the generators, in finite-size scaling the Hamiltonian spectrum is gapless with a critical dynamic exponent $z = D$. One possible application of the DAA is to sandpile models. In the paper we present this application, considering one- and two-dimensional lattices. In the one-dimensional case, when the DAA conserves the number of particles, the avalanches belong to the random-walker universality class (critical exponent $\sigma_\tau = 3/2$). We study the local density of particles inside large avalanches, showing a depletion of particles at the source of the avalanche and an enrichment at its end. In two dimensions we performed extensive Monte Carlo simulations and found $\sigma_\tau = 1.780 \pm 0.005$.
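The random-walker universality class mentioned above can be illustrated directly: avalanche durations distributed like the first-return times of a simple random walk decay as $t^{-3/2}$. A sketch of this universality argument, not of the DAA construction itself:

```python
import numpy as np

def avalanche_duration(rng, t_max=10**6):
    """First-return time to the origin of a simple random walk started at 1;
    its distribution has the t**(-3/2) tail characteristic of the
    random-walker universality class (sigma_tau = 3/2)."""
    x, t = 1, 1
    while x > 0 and t < t_max:
        x += 1 if rng.random() < 0.5 else -1
        t += 1
    return t

rng = np.random.default_rng(0)
durations = [avalanche_duration(rng) for _ in range(20000)]
```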
Abstract:
We consider binary infinite-order stochastic chains perturbed by a random noise. This means that at each time step, the value assumed by the chain can be randomly and independently flipped with a small fixed probability. We show that the transition probabilities of the perturbed chain are uniformly close to the corresponding transition probabilities of the original chain. As a consequence, in the case of stochastic chains with unbounded but otherwise finite variable-length memory, we show that it is possible to recover the context tree of the original chain using a suitable version of the Context algorithm, provided that the noise is small enough.
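A sketch of the perturbation: each symbol of a realization of the binary chain is flipped independently with a small probability epsilon (names are illustrative).

```python
import numpy as np

def perturb(chain, epsilon, rng=None):
    """Flip each symbol of a binary (0/1) sequence independently with
    probability epsilon. For small epsilon the perturbed transition
    probabilities stay uniformly close to the originals, which is why the
    context tree remains recoverable."""
    rng = np.random.default_rng() if rng is None else rng
    chain = np.asarray(chain)
    flips = rng.random(len(chain)) < epsilon
    return np.where(flips, 1 - chain, chain)
```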
Abstract:
We study a general stochastic rumour model in which an ignorant individual has a certain probability of becoming a stifler immediately upon hearing the rumour. We refer to this special kind of stifler as an uninterested individual. Our model also includes distinct rates for meetings between two spreaders in which both become stiflers or only one does, so that particular cases are the classical Daley-Kendall and Maki-Thompson models. We prove a Law of Large Numbers and a Central Limit Theorem for the proportions of those who ultimately remain ignorant and those who have heard the rumour but become uninterested in it.
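A minimal simulation sketch of one variant of this model: a contacted ignorant immediately becomes a stifler (is "uninterested") with a fixed probability, otherwise a spreader, and any spreader-spreader or spreader-stifler meeting silences one spreader, as in the Maki-Thompson dynamics. Parameter names are illustrative.

```python
import numpy as np

def rumour_final_fractions(n, p_uninterested, rng=None):
    """Run the rumour to extinction in a population of size n and return the
    final fractions of ignorants and stiflers (spreaders are extinct)."""
    rng = np.random.default_rng() if rng is None else rng
    ignorant, spreader, stifler = n - 1, 1, 0
    while spreader > 0:
        other = rng.random() * (n - 1)     # uniform contact of a spreader
        if other < ignorant:               # spreader meets ignorant
            ignorant -= 1
            if rng.random() < p_uninterested:
                stifler += 1               # uninterested: stifles at once
            else:
                spreader += 1
        else:                              # meets spreader or stifler
            spreader -= 1
            stifler += 1
    return ignorant / n, stifler / n
```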
Abstract:
Objective: We carry out a systematic assessment of a suite of kernel-based learning machines on the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted; four wavelet basis functions were considered in this study. Then, we provide the average cross-validated accuracy values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, whereby one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value. Conclusions: Overall, the results show that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. The choice of the kernel function and parameter value, as well as the choice of the feature extractor, are critical decisions, although the choice of the wavelet family seems not to be so relevant. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile emerged among all types of machines, involving regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality).
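A sketch of the kernel-sensitivity part of such an assessment using scikit-learn's standard SVM; the mapping from the kernel radius r to scikit-learn's gamma, and the content of the feature matrix X (e.g. statistics of discrete-wavelet-transform coefficients per EEG segment), are assumptions for illustration.

```python
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def kernel_sensitivity(X, y, radii):
    """Cross-validated accuracy of a standard SVM over a grid of Gaussian
    RBF kernel radii, to profile sensitivity to the kernel parameter."""
    scores = {}
    for r in radii:
        clf = SVC(kernel="rbf", gamma=1.0 / (2.0 * r**2))  # radius -> gamma
        scores[r] = cross_val_score(clf, X, y, cv=10).mean()
    return scores
```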
Abstract:
Age-related changes in running kinematics have been reported in the literature using classical inferential statistics. However, this approach has been hampered by the large number of biomechanical gait variables reported and, consequently, by the lack of differences presented in these studies. Data mining techniques have been applied in recent biomedical studies to solve this problem using a more general approach. In the present work, we re-analyzed lower extremity running kinematic data of 17 young and 17 elderly male runners using the Support Vector Machine (SVM) classification approach. In total, 31 kinematic variables were extracted to train the classification algorithm and test the generalized performance. The results revealed different accuracy rates across the three kernel methods adopted in the classifier, with the linear kernel performing best. A subsequent forward feature selection algorithm demonstrated that, with only six features, the linear-kernel SVM achieved a 100% classification rate, showing that these features provide powerful combined information for distinguishing the age groups. The results of the present work demonstrate the potential of this approach for improving knowledge about age-related differences in running gait biomechanics and encourage the use of the SVM in other clinical contexts.
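A sketch of the forward-selection strategy with a linear-kernel SVM, using scikit-learn's built-in SequentialFeatureSelector; the feature matrix X stands in for the 31 kinematic variables, and the 5-fold cross-validation choice is an assumption, not necessarily the study's protocol.

```python
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC

def select_gait_features(X, y, n_features=6):
    """Greedy forward feature selection wrapped around a linear-kernel SVM,
    returning the indices of the selected kinematic variables."""
    clf = SVC(kernel="linear")
    selector = SequentialFeatureSelector(
        clf, n_features_to_select=n_features, direction="forward", cv=5)
    selector.fit(X, y)
    return selector.get_support(indices=True)
```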