994 results for Discrete Observations


Relevance: 60.00%

Abstract:

In this treatise we consider finite systems of branching particles where the particles move independently of each other according to d-dimensional diffusions. Particles are killed at a position-dependent rate, leaving at their death position a random number of descendants according to a position-dependent reproduction law. In addition, particles immigrate at a constant rate (one immigrant per immigration time). A process with the above properties is called a branching diffusion with immigration (BDI). In the first part we present the model in detail and discuss the properties of the BDI under our basic assumptions.

In the second part we consider the problem of reconstructing the trajectory of a BDI from discrete observations. We observe the positions of the particles at discrete times; in particular, we assume that we have no information about the pedigree of the particles. A natural question arises if we want to apply statistical procedures to the discrete observations: how can we find pairs of particle positions which belong to the same particle? We give an easy-to-implement 'reconstruction scheme' which allows us to redraw or 'reconstruct' parts of the trajectory of the BDI with high accuracy. Moreover, asymptotically the whole path can be reconstructed. We also present simulations which show that our partial reconstruction rule is tractable in practice.

In the third part we study how the partial reconstruction rule fits into statistical applications. As an extensive example we present a nonparametric estimator for the diffusion coefficient of a BDI in which the particles move according to one-dimensional diffusions. This estimator is based on the Nadaraya-Watson estimator for the diffusion coefficient of one-dimensional diffusions and uses the partial reconstruction rule developed in the second part. We prove a rate of convergence for this estimator and, finally, present simulations which show that the estimator works well even when our assumptions are relaxed.
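
As a hedged illustration only (not the estimator constructed in the thesis), the sketch below implements a Nadaraya-Watson-type estimate of the squared diffusion coefficient from equidistant observations of a single one-dimensional diffusion path; the pairing of observations across particles via the reconstruction rule is not reproduced, and all function and parameter names are placeholders.

```python
import numpy as np

def nw_diffusion_coefficient(x_obs, delta, grid, bandwidth):
    """Nadaraya-Watson-type estimate of sigma^2(x) from equidistant
    observations x_obs of a one-dimensional diffusion (time step delta).

    Minimal sketch: sigma^2(x) is estimated by kernel-smoothing the
    normalised squared increments (X_{i+1} - X_i)^2 / delta around x.
    """
    increments_sq = np.diff(x_obs) ** 2 / delta      # (X_{i+1} - X_i)^2 / delta
    anchors = x_obs[:-1]                             # points where the increments start
    estimates = np.empty_like(grid, dtype=float)
    for j, x in enumerate(grid):
        weights = np.exp(-0.5 * ((anchors - x) / bandwidth) ** 2)  # Gaussian kernel
        estimates[j] = np.sum(weights * increments_sq) / np.sum(weights)
    return estimates

# Example: observations of dX = sigma dW with sigma = 0.5 (Brownian motion).
rng = np.random.default_rng(0)
delta, n = 0.01, 5000
path = np.cumsum(0.5 * np.sqrt(delta) * rng.standard_normal(n))
grid = np.linspace(-1.0, 1.0, 5)
print(nw_diffusion_coefficient(path, delta, grid, bandwidth=0.2))  # roughly 0.25
```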

Relevance: 60.00%

Abstract:

This thesis is concerned with the estimation of parameters in discrete-time ergodic Markov processes in general and in the CIR model in particular. The CIR model is a stochastic differential equation proposed by Cox, Ingersoll and Ross (1985) to describe the dynamics of interest rates. The problem is to estimate the parameters of the drift and diffusion coefficients from equidistant discrete observations of the CIR process. After a short introduction to the CIR model, we use the method of martingale estimating functions and estimating equations, studied in particular by Bibby and Sørensen, to treat the problem of parameter estimation in ergodic Markov processes in full generality. Following investigations by Sørensen (1999), sufficient conditions (in the form of regularity assumptions on the estimating function) are given for the existence, strong consistency and asymptotic normality of solutions of a martingale estimating equation. Applied to the special case of likelihood estimation, these conditions also ensure local asymptotic normality of the model. Furthermore, a simple criterion for Godambe-Heyde optimality of estimating functions is given, and we sketch how it can be used to construct optimal estimating functions explicitly in important special cases. The general results are then applied to the discretised CIR model. We analyse some estimators of the drift and diffusion coefficients proposed by Overbeck and Rydén (1997), which are defined as solutions of quadratic martingale estimating functions, and compute the optimal element in this class. Finally, we generalise results of Overbeck and Rydén (1997) by proving the existence of a strongly consistent and asymptotically normal solution of the likelihood equation and by establishing local asymptotic normality for the CIR model without restrictions on the parameter space.
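
For orientation, a common parameterisation of the CIR dynamics and the one-step conditional mean on which martingale estimating functions for the drift parameters are typically built; the notation below is chosen here and is not taken from the thesis.

```latex
% CIR short-rate dynamics (Cox, Ingersoll and Ross, 1985):
\[
  dX_t = (a - b X_t)\,dt + \sigma \sqrt{X_t}\, dW_t , \qquad a, \sigma > 0 .
\]
% Conditional mean over an observation step \Delta, the building block of
% (quadratic) martingale estimating equations for the drift parameters:
\[
  E\!\left[ X_{(i+1)\Delta} \mid X_{i\Delta} \right]
  = X_{i\Delta}\, e^{-b\Delta} + \frac{a}{b}\left( 1 - e^{-b\Delta} \right).
\]
```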

Relevance: 60.00%

Abstract:

A generic method for the estimation of parameters of Stochastic Ordinary Differential Equations (SODEs) is introduced and developed. The algorithm, called the GePERs method, uses a genetic optimisation algorithm to minimise a stochastic objective function based on the Kolmogorov-Smirnov (KS) statistic, with numerical simulations used to form the KS statistic. Some of the factors that improve the precision of the estimates are also examined. The method is used to estimate parameters of diffusion equations and jump-diffusion equations, and is applied to the problem of model selection for the Queensland electricity market. (C) 2003 Elsevier B.V. All rights reserved.
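
As a hedged sketch of the general idea only (simulate the SODE, compare simulated and observed samples through the Kolmogorov-Smirnov statistic, and drive the comparison with an evolutionary optimiser), the following uses Euler-Maruyama simulation of an Ornstein-Uhlenbeck process and SciPy's differential evolution as a stand-in for the genetic algorithm; it is not the GePERs implementation, and all parameter names and bounds are invented.

```python
import numpy as np
from scipy.stats import ks_2samp
from scipy.optimize import differential_evolution

def simulate_ou(theta, mu, sigma, x0, dt, n, rng):
    """Euler-Maruyama simulation of dX = theta*(mu - X) dt + sigma dW."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = x[i - 1] + theta * (mu - x[i - 1]) * dt \
               + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

def ks_objective(params, observed, dt, rng):
    """Stochastic objective: KS distance between observed and simulated samples."""
    theta, mu, sigma = params
    simulated = simulate_ou(theta, mu, sigma, observed[0], dt, len(observed), rng)
    return ks_2samp(observed, simulated).statistic

rng = np.random.default_rng(1)
dt = 0.01
observed = simulate_ou(2.0, 1.0, 0.3, 1.0, dt, 2000, rng)   # synthetic "data"

result = differential_evolution(
    ks_objective,
    bounds=[(0.1, 5.0), (-2.0, 2.0), (0.05, 1.0)],          # (theta, mu, sigma)
    args=(observed, dt, rng),
    maxiter=30, seed=1, tol=1e-3,
)
print(result.x)   # estimates of (theta, mu, sigma)
```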

Relevance: 60.00%

Abstract:

The challenge of detecting a change in the distribution of data is a sequential decision problem that is relevant to many engineering solutions, including quality control and machine and process monitoring. This dissertation develops techniques for exact solution of change-detection problems with discrete time and discrete observations. Change-detection problems are classified as Bayes or minimax based on the availability of information on the change-time distribution. A Bayes optimal solution uses prior information about the distribution of the change time to minimize the expected cost, whereas a minimax optimal solution minimizes the cost under the worst-case change-time distribution. Both types of problems are addressed. The most important result of the dissertation is the development of a polynomial-time algorithm for the solution of important classes of Markov Bayes change-detection problems. Existing techniques for epsilon-exact solution of partially observable Markov decision processes have complexity exponential in the number of observation symbols. A new algorithm, called constellation induction, exploits the concavity and Lipschitz continuity of the value function, and has complexity polynomial in the number of observation symbols. It is shown that change-detection problems with a geometric change-time distribution and identically- and independently-distributed observations before and after the change are solvable in polynomial time. Also, change-detection problems on hidden Markov models with a fixed number of recurrent states are solvable in polynomial time. A detailed implementation and analysis of the constellation-induction algorithm are provided. Exact solution methods are also established for several types of minimax change-detection problems. Finite-horizon problems with arbitrary observation distributions are modeled as extensive-form games and solved using linear programs. Infinite-horizon problems with linear penalty for detection delay and identically- and independently-distributed observations can be solved in polynomial time via epsilon-optimal parameterization of a cumulative-sum procedure. Finally, the properties of policies for change-detection problems are described and analyzed. Simple classes of formal languages are shown to be sufficient for epsilon-exact solution of change-detection problems, and methods for finding minimally sized policy representations are described.
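
The constellation-induction algorithm is not reproduced here; the sketch below only illustrates the cumulative-sum procedure referenced for the infinite-horizon minimax case, run on independent, identically distributed discrete observations. All distributions, symbols and the threshold are invented for the example.

```python
import numpy as np

def cusum_change_detection(observations, p_before, p_after, threshold):
    """Log-likelihood-ratio CUSUM for i.i.d. discrete observations.

    p_before, p_after: probability vectors over the observation symbols
    before and after the change.  Returns the first time the CUSUM
    statistic exceeds the threshold (or None if it never does).
    """
    llr = np.log(np.asarray(p_after) / np.asarray(p_before))  # per-symbol log-likelihood ratios
    s = 0.0
    for t, obs in enumerate(observations):
        s = max(0.0, s + llr[obs])     # reset at zero, accumulate evidence of a change
        if s >= threshold:
            return t
    return None

# Example with 3 observation symbols and a change at t = 200 (invented numbers).
rng = np.random.default_rng(2)
p0, p1 = [0.6, 0.3, 0.1], [0.2, 0.3, 0.5]
obs = np.concatenate([rng.choice(3, size=200, p=p0), rng.choice(3, size=200, p=p1)])
print(cusum_change_detection(obs, p0, p1, threshold=5.0))   # alarm shortly after t = 200
```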

Relevance: 30.00%

Abstract:

We define a pair-correlation function that can be used to characterize spatiotemporal patterning in experimental images and in snapshots from discrete simulations. Unlike previous pair-correlation functions, the one developed here depends on the location and size of objects. The pair-correlation function can be used to indicate complete spatial randomness, aggregation or segregation over a range of length scales, and it quantifies spatial structure such as the shape, size and distribution of clusters. Comparing pair-correlation data for various experimental and simulation images illustrates its potential use as a summary statistic for calibrating discrete models of various physical processes.
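
The paper's object-size-dependent pair-correlation function is not reproduced here; purely for orientation, the following is a minimal sketch of a standard distance-binned pair-correlation estimate for point locations in a periodic square domain, normalised so that complete spatial randomness gives values near one. All names and numbers are placeholders.

```python
import numpy as np

def pair_correlation(points, domain_size, r_max, n_bins):
    """Distance-binned pair-correlation estimate for points in a periodic
    square domain of side length domain_size.  Values near 1 indicate
    complete spatial randomness; >1 aggregation; <1 segregation."""
    n = len(points)
    diff = points[:, None, :] - points[None, :, :]
    diff -= domain_size * np.round(diff / domain_size)        # periodic boundary (minimum image)
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    dist = dist[np.triu_indices(n, k=1)]                      # unordered pairs only
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts, _ = np.histogram(dist, bins=edges)
    # Expected pair counts per annulus under complete spatial randomness.
    annulus_area = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)
    expected = (n * (n - 1) / 2) * annulus_area / domain_size ** 2
    centres = 0.5 * (edges[1:] + edges[:-1])
    return centres, counts / expected

rng = np.random.default_rng(3)
points = rng.uniform(0.0, 10.0, size=(500, 2))                # CSR reference pattern
r, g = pair_correlation(points, domain_size=10.0, r_max=2.0, n_bins=20)
print(np.round(g, 2))                                         # values scatter around 1
```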

Relevance: 30.00%

Abstract:

This paper proposes a method, based on polychotomous discrete choice methods, to impute a continuous measure of income when only a bracketed measure of income is available, and only for a subset of the observations. The method is shown to perform well with CPS data. © 1991.
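
The paper's polychotomous discrete choice estimator is not reproduced here; as a hedged, simplified stand-in, the sketch below fits an interval-censored normal regression to bracketed outcomes by maximum likelihood and imputes each observation with the conditional mean of the fitted distribution truncated to its bracket. All variable names and the synthetic data are invented.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def fit_interval_normal(lower, upper, x):
    """Fit income ~ N(x @ beta, sigma^2) when only the bracket
    [lower, upper) containing each income is observed."""
    def neg_loglik(params):
        beta, log_sigma = params[:-1], params[-1]
        mu, sigma = x @ beta, np.exp(log_sigma)
        prob = norm.cdf((upper - mu) / sigma) - norm.cdf((lower - mu) / sigma)
        return -np.sum(np.log(np.clip(prob, 1e-12, None)))
    return minimize(neg_loglik, np.zeros(x.shape[1] + 1), method="BFGS")

def impute_bracketed(lower, upper, x, beta, sigma):
    """Conditional mean of the fitted normal truncated to each bracket."""
    mu = x @ beta
    a, b = (lower - mu) / sigma, (upper - mu) / sigma
    return mu + sigma * (norm.pdf(a) - norm.pdf(b)) / (norm.cdf(b) - norm.cdf(a))

# Synthetic example with four brackets (invented numbers).
rng = np.random.default_rng(4)
x = np.column_stack([np.ones(1000), rng.normal(size=1000)])
true_income = x @ np.array([2.0, 0.5]) + 0.8 * rng.normal(size=1000)
edges = np.array([-np.inf, 1.0, 2.0, 3.0, np.inf])
idx = np.searchsorted(edges, true_income, side="right") - 1
lower, upper = edges[idx], edges[idx + 1]

fit = fit_interval_normal(lower, upper, x)
beta_hat, sigma_hat = fit.x[:-1], np.exp(fit.x[-1])
imputed = impute_bracketed(lower, upper, x, beta_hat, sigma_hat)
print(np.corrcoef(true_income, imputed)[0, 1])   # correlation of imputed vs. true values
```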

Relevance: 30.00%

Abstract:

The problem of recovering information from measurement data has been studied for a long time. In the beginning the methods were mostly empirical, but towards the end of the 1960s Backus and Gilbert started the development of mathematical methods for the interpretation of geophysical data. The problem of recovering information about a physical phenomenon from measurement data is an inverse problem. Throughout this work, the statistical inversion method is used to obtain a solution. Assuming that the measurement vector is a realization of fractional Brownian motion, the goal is to retrieve the amplitude and the Hurst parameter. We prove that, under some conditions, the solution of the discretized problem coincides with the solution of the corresponding continuous problem as the number of observations tends to infinity. The measurement data are usually noisy, and we assume the data to be the sum of two vectors: the trend and the noise. Both vectors are assumed to be realizations of fractional Brownian motions, and the goal is to retrieve their parameters using the statistical inversion method. We prove partial uniqueness of the solution. Moreover, with the support of numerical simulations, we show that in certain cases the solution is reliable and the reconstruction of the trend vector is quite accurate.
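
The statistical inversion solution developed in the thesis is not reproduced here; as a hedged point of comparison, the sketch below recovers the Hurst parameter and amplitude of a fractional Brownian motion from discrete observations by a standard log-log regression on the variance of increments at several lags. All names and numbers are placeholders.

```python
import numpy as np

def sample_fbm(n, delta, hurst, sigma, rng):
    """Exact fBm sample on a regular grid via Cholesky of the covariance matrix."""
    t = delta * np.arange(1, n + 1)
    cov = 0.5 * sigma ** 2 * (t[:, None] ** (2 * hurst) + t[None, :] ** (2 * hurst)
                              - np.abs(t[:, None] - t[None, :]) ** (2 * hurst))
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

def estimate_hurst_and_amplitude(x, delta, max_lag=10):
    """Log-log regression of increment variances on lag:
    Var(X_{t+k*delta} - X_t) = sigma^2 * (k*delta)^(2H), so the slope is 2H."""
    lags = np.arange(1, max_lag + 1)
    variances = np.array([np.var(x[k:] - x[:-k]) for k in lags])
    slope, intercept = np.polyfit(np.log(lags * delta), np.log(variances), 1)
    return slope / 2.0, np.exp(intercept / 2.0)   # (Hurst estimate, amplitude estimate)

rng = np.random.default_rng(5)
x = sample_fbm(n=800, delta=0.01, hurst=0.7, sigma=1.5, rng=rng)
print(estimate_hurst_and_amplitude(x, delta=0.01))   # roughly (0.7, 1.5)
```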

Relevance: 30.00%

Abstract:

The critical excavation depth of a jointed rock slope is an important problem in rock engineering. This paper studies the critical excavation depth for two idealized jointed rock slopes by employing a face-to-face discrete element method (DEM). The DEM is based on discontinuity analysis, which can account for anisotropic and discontinuous deformations due to joints and their orientations, and uses four lump-points at each surface of the rock blocks to describe their interactions. The relationship between the critical excavation depth D_s and the natural slope angle α, the joint inclination angle θ and the joint strength parameters c_r and φ_r is analyzed, and the critical excavation depths obtained with this DEM and with the limit equilibrium method (LEM) are compared. Furthermore, the effects of joints on the failure modes are compared between DEM simulations and experimental observations. It is found that the DEM predicts a lower critical excavation depth than the LEM if the joint structures in the rock mass are not ignored.
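
The face-to-face DEM formulation is not reproduced here; purely for orientation, the block below states a generic planar limit-equilibrium factor of safety of the kind such LEM comparisons are commonly based on. It is an assumption, not taken from the paper, that its LEM has this form; the symbols follow the joint parameters named above.

```latex
% Generic planar limit-equilibrium factor of safety for a block of weight W
% sliding on a joint of area A inclined at \theta, with joint cohesion c_r
% and joint friction angle \phi_r (illustrative only):
\[
  F_s = \frac{c_r A + W \cos\theta \, \tan\phi_r}{W \sin\theta} .
\]
```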

Relevance: 30.00%

Abstract:

The first thesis topic is a perturbation method for resonantly coupled nonlinear oscillators. By successive near-identity transformations of the original equations, one obtains new equations with simple structure that describe the long time evolution of the motion. This technique is related to two-timing in that secular terms are suppressed in the transformation equations. The method has some important advantages. Appropriate time scalings are generated naturally by the method, and don't need to be guessed as in two-timing. Furthermore, by continuing the procedure to higher order, one extends (formally) the time scale of valid approximation. Examples illustrate these claims. Using this method, we investigate resonance in conservative, non-conservative and time dependent problems. Each example is chosen to highlight a certain aspect of the method.

The second thesis topic concerns the coupling of nonlinear chemical oscillators. The first problem is the propagation of chemical waves of an oscillating reaction in a diffusive medium. Using two-timing, we derive a nonlinear equation that determines how spatial variations in the phase of the oscillations evolve in time. This result is the key to understanding the propagation of chemical waves. In particular, we use it to account for certain experimental observations on the Belousov-Zhabotinskii reaction.

Next, we analyse the interaction between a pair of coupled chemical oscillators. This time, we derive an equation for the phase shift, which measures how much the oscillators are out of phase. This result is the key to understanding M. Marek's and I. Stuchl's results on coupled reactor systems. In particular, our model accounts for synchronization and its bifurcation into rhythm splitting.
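
Purely as a generic illustration of the kind of reduced equation such an analysis produces (the form and notation below are assumptions, not taken from the thesis), the phase difference ψ between two weakly coupled oscillators is often governed by an Adler-type equation:

```latex
% Phase-difference equation for two weakly coupled oscillators
% (detuning \Delta\omega, coupling strength K):
\[
  \frac{d\psi}{dt} = \Delta\omega - K \sin\psi .
\]
% A phase-locked (synchronized) state exists when |\Delta\omega| \le K; when
% the detuning exceeds the coupling, the locked state disappears and the
% phase difference drifts, the kind of bifurcation described above.
```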

Finally, we analyse large systems of coupled chemical oscillators. Using a continuum approximation, we demonstrate mechanisms that cause auto-synchronization in such systems.

Relevance: 30.00%

Abstract:

We approach the problem of automatically modeling a mechanical system from data about its dynamics, using a method motivated by variational integrators. We write the discrete Lagrangian as a quadratic polynomial with varying coefficients, and then use the discrete Euler-Lagrange equations to numerically solve for the values of these coefficients near the data points. This method correctly modeled the Lagrangian of a simple harmonic oscillator and a simple pendulum, even with significant measurement noise added to the trajectories.
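
For reference, the discrete Euler-Lagrange equations that the fitted discrete Lagrangian L_d must satisfy along an observed trajectory q_0, q_1, ..., q_N, in standard variational-integrator notation; the paper's particular quadratic parameterisation of the coefficients is not reproduced here.

```latex
% Discrete Euler-Lagrange equations along an observed trajectory q_0, ..., q_N:
\[
  D_2 L_d(q_{k-1}, q_k) + D_1 L_d(q_k, q_{k+1}) = 0 ,
  \qquad k = 1, \dots, N-1 ,
\]
% where D_1 and D_2 denote derivatives of the discrete Lagrangian L_d with
% respect to its first and second arguments.
```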

Relevance: 30.00%

Abstract:

This work shows how a dialogue model can be represented as a Partially Observable Markov Decision Process (POMDP) with observations composed of a discrete and continuous component. The continuous component enables the model to directly incorporate a confidence score for automated planning. Using a testbed simulated dialogue management problem, we show how recent optimization techniques are able to find a policy for this continuous POMDP which outperforms a traditional MDP approach. Further, we present a method for automatically improving handcrafted dialogue managers by incorporating POMDP belief state monitoring, including confidence score information. Experiments on the testbed system show significant improvements for several example handcrafted dialogue managers across a range of operating conditions.
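
A minimal sketch of the belief monitoring involved, assuming a discrete state space and folding the confidence score into the observation likelihood heuristically; the testbed's actual dialogue models are not reproduced, and all matrices and numbers below are invented.

```python
import numpy as np

def belief_update(belief, action, obs_symbol, confidence, T, O):
    """One step of POMDP belief monitoring with a discrete observation symbol
    plus a continuous confidence score in [0, 1].

    T[a][s, s'] : transition probabilities
    O[a][s', o] : probability of observing symbol o in state s' after action a
    The confidence score is folded in heuristically: with probability
    `confidence` the recogniser reported the right symbol, otherwise the
    symbol is treated as uninformative (uniform over symbols).
    """
    n_obs = O[action].shape[1]
    obs_lik = confidence * O[action][:, obs_symbol] + (1.0 - confidence) / n_obs
    predicted = belief @ T[action]                 # sum_s T(s'|s,a) b(s)
    updated = obs_lik * predicted
    return updated / updated.sum()

# Toy 2-state, 1-action, 2-symbol example (all numbers invented).
T = [np.array([[0.9, 0.1], [0.2, 0.8]])]
O = [np.array([[0.8, 0.2], [0.3, 0.7]])]
b = np.array([0.5, 0.5])
b = belief_update(b, action=0, obs_symbol=1, confidence=0.9, T=T, O=O)
print(b)
```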

Relevance: 30.00%

Abstract:

Many of the challenges faced in health care delivery can be informed by building models. In particular, Discrete Conditional Survival (DCS) models, recently under development, can provide policymakers with a flexible tool for assessing time-to-event data. The DCS model can fit the survival curve with various underlying distribution types and can cluster or group observations (based on other covariate information) externally to the distribution fits. The flexibility of the model comes from the choice of data-mining techniques available for ascertaining the different subsets and from the choice of distribution types available for modelling these informed subsets. This paper presents an illustrated example in which a Discrete Conditional Survival model is deployed to represent ambulance response times by a fully parameterised model. This model is contrasted with a parametric accelerated failure-time model, illustrating the strength and usefulness of Discrete Conditional Survival models.
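
The DCS model is only described here in terms of its two stages, so the following is a hedged sketch of that two-stage idea (a data-mining step to group observations by covariates, then a separate parametric fit per group), using k-means and a lognormal fit as stand-ins for the paper's choices. Censoring, present in real time-to-event data, is ignored, and all names and data are invented.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy import stats

def fit_dcs_style(covariates, times, n_groups=3):
    """Two-stage sketch: cluster on covariates, then fit a parametric
    survival distribution (here lognormal) within each cluster."""
    labels = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit_predict(covariates)
    fits = {}
    for g in range(n_groups):
        shape, loc, scale = stats.lognorm.fit(times[labels == g], floc=0.0)
        fits[g] = (shape, scale)          # lognormal parameters for group g
    return labels, fits

# Synthetic response times whose scale depends on a covariate (invented data).
rng = np.random.default_rng(6)
covariates = rng.normal(size=(600, 2))
scale = np.exp(1.0 + 0.8 * (covariates[:, 0] > 0))
times = scale * rng.lognormal(mean=0.0, sigma=0.5, size=600)
labels, fits = fit_dcs_style(covariates, times)
print(fits)
```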

Relevance: 30.00%

Abstract:

In this study, the behaviour of iron ore fines with varying levels of adhesion was investigated using a confined compression test and a uniaxial test. The uniaxial test was conducted using the semi-automated uniaxial EPT tester, in which the cohesive strength of a bulk solid is evaluated from an unconfined compression test following a period of consolidation to a pre-defined vertical stress. The iron ore fines were also tested in a separate confined compression tester, the K0 tester, by measuring both the vertical and circumferential strains on the cylindrical container walls under vertical loading, to determine the lateral pressure ratio. Discrete Element Method (DEM) simulations of both experiments were carried out and the predictions were compared with the experimental observations. A recently developed DEM contact model for cohesive solids, an Elasto-Plastic Adhesive model, was used. This particle contact model uses hysteretic non-linear loading and unloading paths and an adhesion parameter which is a function of the maximum contact overlap. The model parameters for the simulations are phenomenologically based to reproduce the key bulk characteristics exhibited by the solid. The simulation results show good agreement in capturing the stress-history-dependent behaviour depicted by the flow function of the cohesive iron ore fines, while also providing a reasonably good match for the lateral pressure ratio observed during the confined compression K0 tests. This demonstrates the potential for the DEM model to be used in the simulation of bulk handling applications.
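
The Elasto-Plastic Adhesive contact law used in the study is not reproduced here; the sketch below is a simplified linear hysteretic variant with an adhesion limit that grows with the maximum overlap reached, intended only to illustrate the ingredients named above. The stiffnesses and adhesion parameter are invented.

```python
def contact_force_history(overlaps, k_load=1.0e4, k_unload=2.0e4, k_adh=5.0e2):
    """Simplified linear hysteretic contact law with overlap-dependent adhesion.

    Virgin loading follows stiffness k_load; unloading/reloading follows the
    stiffer k_unload anchored at a plastic overlap; the tensile (adhesive)
    force is capped at -k_adh * max_overlap, i.e. adhesion grows with the
    largest overlap the contact has experienced.
    """
    forces, max_overlap = [], 0.0
    for delta in overlaps:
        max_overlap = max(max_overlap, delta)
        f_load = k_load * delta                               # virgin loading branch
        delta_plastic = max_overlap * (1.0 - k_load / k_unload)
        f_unload = k_unload * (delta - delta_plastic)         # unload/reload branch
        force = min(f_load, f_unload)                         # stay on or below the loading branch
        force = max(force, -k_adh * max_overlap)              # adhesive pull-off limit
        forces.append(force)
    return forces

# Load to an overlap of 1 mm, then unload back to separation (overlaps in metres).
history = [0.0002 * i for i in range(6)] + [0.001 - 0.0002 * i for i in range(6)]
print(contact_force_history(history))
```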

Relevance: 30.00%

Abstract:

We present combined observations made near midnight by the EISCAT radar, all-sky cameras and the Combined Release and Radiation Effects Satellite (CRRES) shortly before and during a substorm. In particular, we study a discrete, equatorward-drifting auroral arc seen several degrees poleward of the onset region. The arc passes through the field-aligned beam of the EISCAT radar and is seen to be associated with a considerable upflow of ionospheric plasma. During the substorm, the CRRES satellite observed two major injections, 17 min apart, the second of which was dominated by O+ ions. We show that the observed arc was in a suitable location in both latitude and MLT to have fed O+ ions into the second injection and that the upward flux of ions associated with it was sufficient to explain the observed injection. We interpret these data as showing that arcs in the nightside plasma-sheet boundary layer could be the source of O+ ions energised by a dipolarisation of the mid- and near-Earth tail, as opposed to ions ejected from the dayside ionosphere in the cleft ion fountain.

Relevance: 30.00%

Abstract:

A coordinated ground-based observational campaign using the IMAGE magnetometer network, EISCAT radars and optical instruments on Svalbard has made possible detailed studies of a travelling convection vortices (TCV) event on 6 January 1992. Combining the data from these facilities allows us to draw a very detailed picture of the features and dynamics of this TCV event. On the way from the noon to the dawn meridian, the vortices went through a remarkable development. The propagation velocity in the ionosphere increased from 2.5 to 7.4 km s⁻¹, and the orientation of the major axes of the vortices rotated from being almost parallel to the magnetic meridian near noon to essentially perpendicular at dawn. By combining electric fields obtained by EISCAT and ionospheric currents deduced from magnetic field recordings, the conductivities associated with the vortices could be estimated. Contrary to expectations, we found higher conductivities below the downward field-aligned current (FAC) filament than below the upward-directed one. Unexpected results also emerged from the optical observations. For most of the time there was no discrete aurora at 557.7 nm associated with the TCVs. Only once did a discrete form appear at the foot of the upward FAC. This aurora subsequently expanded eastward and westward, leaving its centre at the same longitude while the TCV continued to travel westward. We also attempt to identify the source regions of TCVs in the magnetosphere and discuss possible generation mechanisms.