906 results for Markov jump parameter
Abstract:
Monte Carlo algorithms often aim to draw from a distribution π by simulating a Markov chain with transition kernel P such that π is invariant under P. However, there are many situations for which it is impractical or impossible to draw from the transition kernel P. For instance, this is the case with massive datasets, where it is prohibitively expensive to calculate the likelihood, and with intractable likelihood models arising from, for example, Gibbs random fields, such as those found in spatial statistics and network analysis. A natural approach in these cases is to replace P by an approximation P̂. Using theory from the stability of Markov chains, we explore a variety of situations where it is possible to quantify how 'close' the chain given by the transition kernel P̂ is to the chain given by P. We apply these results to several examples from spatial statistics and network analysis.
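As a concrete illustration of replacing P with an approximation P̂, here is a minimal sketch (not from the paper) of a Metropolis-Hastings kernel whose acceptance ratio is computed from a random subsample of the data; the Gaussian model and all tuning values are invented for illustration:

```python
import numpy as np

def log_lik(theta, data):
    # Gaussian log-likelihood; illustrative model only
    return -0.5 * np.sum((data - theta) ** 2)

def mh_step(theta, data, rng, subsample=None, step=0.5):
    """One Metropolis-Hastings step. If `subsample` is set, the exact
    kernel P is replaced by an approximation P-hat that evaluates the
    likelihood on a random subsample, rescaled to the full data size."""
    batch = data if subsample is None else rng.choice(data, subsample)
    scale = len(data) / len(batch)
    prop = theta + step * rng.standard_normal()
    log_alpha = scale * (log_lik(prop, batch) - log_lik(theta, batch))
    return prop if np.log(rng.uniform()) < log_alpha else theta

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, 100_000)
theta_exact, theta_approx = 0.0, 0.0
for _ in range(1_000):
    theta_exact = mh_step(theta_exact, data, rng)            # kernel P
    theta_approx = mh_step(theta_approx, data, rng, 1_000)   # kernel P-hat
print(theta_exact, theta_approx)  # both chains should drift toward ~2.0
```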
Abstract:
The permeability parameter (C) for the movement of cephalosporin C across the outer membrane of Pseudomonas aeruginosa was measured using the widely accepted method of Zimmermann & Rosselet. In one experiment, the value of C varied continuously from 4.2 to 10.8 cm³ min⁻¹ (mg dry wt cells)⁻¹ as the concentration of the test substrate, cephalosporin C, ranged from 50 to 5 μM. Dependence of C on the concentration of test substrate was still observed when the effect of a possible electric potential difference across the outer membrane was corrected for. In quantitative studies of β-lactam permeation, the dependence of C on the concentration of β-lactam should be taken into account.
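For context, the Zimmermann & Rosselet method equates, at steady state, the diffusive influx of β-lactam across the outer membrane with its rate of hydrolysis by the periplasmic β-lactamase; a sketch of the standard relations, with generic symbols not taken from this abstract:

```latex
% Steady state: influx across the outer membrane = periplasmic hydrolysis.
% Fick's law with permeability parameter C and external / periplasmic
% substrate concentrations S_o, S_p:
V = C\,(S_o - S_p)
% Michaelis--Menten kinetics of the periplasmic beta-lactamase:
V = \frac{V_{\max}\, S_p}{K_m + S_p}
% Solving for S_p from the measured rate V then gives the permeability parameter:
C = \frac{V}{S_o - S_p}
```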
Abstract:
The transition parameter is based on the electron characteristics close to the Earth's dayside magnetopause, but reveals systematic ordering of other, independent data such as the ion flow, density and temperature, and the orientation and strength of the magnetic field. Potentially, therefore, it is a very useful tool for resolving ambiguities in a sequence of satellite data caused by the effects of structure and motion of the boundary; however, its application has been limited because there has been no clear understanding of how it works. We present an analysis of data from the AMPTE-UKS satellite which shows that the transition parameter orders magnetopause data because magnetic reconnection generates newly-opened field lines which coat the boundary: a direct relationship is found with the time elapsed since the boundary-layer field line was opened. A simple model is used to reproduce this behaviour.
Abstract:
Existing methods of dive analysis, developed for fully aquatic animals, tend to focus on the frequency of behaviors rather than transitions between them. They therefore do not account for the behavioral variability of semiaquatic animals or their switching between terrestrial and aquatic environments. This is the first study to use hidden Markov models (HMM) to divide the dives of a semiaquatic animal into clusters and thus identify the environmental predictors of transition between behavioral modes. We used 18 existing data sets of the dives of 14 American mink (Neovison vison) fitted with time-depth recorders in lowland England. Using HMM, we identified 3 behavioral states (1, temporal cluster of dives; 2, more loosely aggregated diving within aquatic activity; and 3, terminal dive of a cluster or a single, isolated dive). Based on the higher-than-expected proportion of dives in State 1, we conclude that mink tend to dive in clusters. We found no relationship between temperature and the proportion of dives in each state or between temperature and the rate of transition between states, meaning that in our study area, mink are apparently not adopting different diving strategies at different temperatures. Transition analysis between states showed no correlation between ambient temperature and the likelihood of mink switching from one state to another, that is, changing foraging modes. The variables provided good discrimination and grouped into consistent states well, indicating promise for further application of HMM and other state-transition analyses in studies of semiaquatic animals.
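A minimal sketch of this kind of analysis, using the third-party hmmlearn package; the dive features and their distributions are invented stand-ins, not the study's variables:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # third-party: pip install hmmlearn

rng = np.random.default_rng(1)
# Invented stand-ins for dive features (e.g. dive duration and the
# interval to the next dive), one row per dive.
X = np.column_stack([
    rng.gamma(shape=2.0, scale=10.0, size=500),   # dive duration (s)
    rng.gamma(shape=1.5, scale=30.0, size=500),   # post-dive interval (s)
])

# Fit a 3-state HMM, mirroring the three behavioral states in the study.
model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=100,
                    random_state=0).fit(X)
states = model.predict(X)                 # most likely state per dive
print(model.transmat_)                    # state-to-state transition probabilities
print(np.bincount(states) / len(states))  # proportion of dives per state
```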
Abstract:
Classical regression methods take vectors as covariates and estimate the corresponding vectors of regression parameters. When addressing regression problems on covariates of more complex form, such as multi-dimensional arrays (i.e. tensors), traditional computational models can be severely compromised by ultrahigh dimensionality as well as complex structure. By exploiting the special structure of tensor covariates, the tensor regression model provides a promising solution for reducing the model's dimensionality to a manageable level, thus leading to efficient estimation. Most existing tensor-based methods estimate each individual regression problem independently, based on a tensor decomposition that allows simultaneous projections of an input tensor onto more than one direction along each mode. In practice, multi-dimensional data are collected under the same or very similar conditions, so the data share some common latent components but can also have their own independent parameters for each regression task. It is therefore beneficial to analyse the regression parameters of all the regressions in a linked way. In this paper, we propose a tensor regression model based on Tucker Decomposition, which simultaneously identifies not only the common components of parameters across all the regression tasks, but also the independent factors contributing to each particular regression task. Under this paradigm, the number of independent parameters along each mode is constrained by a sparsity-preserving regulariser. Linked multiway parameter analysis and sparsity modelling further reduce the total number of parameters, with a lower memory cost than tensor-based counterparts. The effectiveness of the new method is demonstrated on real data sets.
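The building block here is the Tucker decomposition itself; a small sketch using the third-party tensorly package (this is the generic decomposition, not the paper's linked, sparsity-regularised estimator):

```python
import numpy as np
import tensorly as tl                      # third-party: pip install tensorly
from tensorly.decomposition import tucker

rng = np.random.default_rng(0)
X = tl.tensor(rng.standard_normal((8, 6, 4)))  # toy tensor covariate

# Tucker decomposition: a small core tensor plus one factor matrix per mode.
# In the paper's setting, some factors would be shared across regression
# tasks and others task-specific; here we only show the decomposition.
core, factors = tucker(X, rank=[3, 3, 2])
print(core.shape)                  # (3, 3, 2)
print([f.shape for f in factors])  # [(8, 3), (6, 3), (4, 2)]

X_hat = tl.tucker_to_tensor((core, factors))          # low-rank reconstruction
print(np.linalg.norm(X - X_hat) / np.linalg.norm(X))  # relative error
```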
Abstract:
The detection of physiological signals from the motor system (electromyographic signals) is being utilized in clinical practice to guide the therapist toward a more precise and accurate diagnosis of motor disorders. In this context, the decomposition of EMG (electromyographic) signals, which includes the identification and classification of the MUAPs (Motor Unit Action Potentials) in an EMG signal, is very important in helping the therapist evaluate motor disorders. EMG decomposition is a complex task because EMG features depend on the electrode type (needle or surface), its placement relative to the muscle, the contraction level, and the health of the neuromuscular system. To date, most research on EMG decomposition has used EMG signals acquired by needle electrodes, due to their advantages for processing this type of signal. However, relatively little research has been conducted on surface EMG signals. This article therefore aims to contribute to clinical practice by presenting a technique that permits the decomposition of surface EMG signals via Hidden Markov Models, supported by differential evolution and spectral clustering techniques. The developed system presented coherent results in: (1) identification of the number of active Motor Units in the EMG signal; (2) presentation of the morphological patterns of the MUAPs in the EMG signal; (3) identification of the firing sequence of the Motor Units. The model proposed in this work is an advance in the research area of surface EMG signal decomposition.
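One ingredient of such a pipeline is grouping detected waveform snippets into putative Motor Units; a sketch of that clustering step using scikit-learn's spectral clustering, with invented template waveforms (the paper's full HMM-based decomposition is not reproduced here):

```python
import numpy as np
from sklearn.cluster import SpectralClustering  # third-party: scikit-learn

rng = np.random.default_rng(0)
# Invented stand-in for detected MUAP snippets: two template shapes
# plus noise, 40 samples per snippet.
t = np.linspace(0, 1, 40)
templates = [np.sin(2 * np.pi * 3 * t), np.exp(-((t - 0.3) / 0.1) ** 2)]
snippets = np.array([templates[i % 2] + 0.1 * rng.standard_normal(40)
                     for i in range(60)])

# Group snippets into putative motor units; in the paper's pipeline a
# clustering step of this kind supports the HMM-based decomposition.
labels = SpectralClustering(n_clusters=2, random_state=0).fit_predict(snippets)
print(np.bincount(labels))  # snippet count per putative motor unit
```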
Abstract:
In the Coupled Model Intercomparison Project Phase 5 (CMIP5), the model-mean increase in global mean surface air temperature T under the 1pctCO2 scenario (atmospheric CO2 increasing at 1% yr⁻¹) during the second doubling of CO2 is 40% larger than the transient climate response (TCR), i.e. the increase in T during the first doubling. We identify four possible contributory effects. First, the surface climate system loses heat less readily into the ocean beneath as the latter warms. The model spread in the thermal coupling between the upper and deep ocean largely explains the model spread in ocean heat uptake efficiency. Second, CO2 radiative forcing may rise more rapidly than logarithmically with CO2 concentration. Third, the climate feedback parameter may decline as the CO2 concentration rises. With CMIP5 data, we cannot distinguish the second and third possibilities. Fourth, the climate feedback parameter declines as time passes or T rises; in 1pctCO2, this effect is less important than the others. We find that T projected for the end of the twenty-first century correlates more highly with T at the time of quadrupled CO2 in 1pctCO2 than with the TCR, and we suggest that the TCR may be underestimated from observed climate change.
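The abstract's four effects can be read against the standard linear forcing-feedback budget; a sketch in the usual notation (a textbook simplification, not the paper's analysis):

```latex
% Linear global energy budget: top-of-atmosphere imbalance N, forcing F,
% climate feedback parameter \alpha (taken constant here):
N = F - \alpha T
% Logarithmic forcing assumption (the paper's second effect questions this):
F(C) = F_{2\times} \log_2\!\left( C / C_0 \right)
% With ocean heat uptake parameterized as N \approx \kappa T,
% the warming at the first doubling (the TCR) is
\mathrm{TCR} = \frac{F_{2\times}}{\alpha + \kappa}
```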
Abstract:
Forensic taphonomy involves the use of decomposition to estimate postmortem interval (PMI) or locate clandestine graves. Yet, cadaver decomposition remains poorly understood, particularly following burial in soil. Presently, we do not know how most edaphic and environmental parameters, including soil moisture, influence the breakdown of cadavers following burial and alter the processes that are used to estimate PMI and locate clandestine graves. To address this, we buried juvenile rat (Rattus rattus) cadavers (∼18 g wet weight) in three contrasting soils from tropical savanna ecosystems located in Pallarenda (sand), Wambiana (medium clay), or Yabulu (loamy sand), Queensland, Australia. These soils were sieved (2 mm), weighed (500 g dry weight), calibrated to a matric potential of -0.01 megapascals (MPa), -0.05 MPa, or -0.3 MPa (wettest to driest) and incubated at 22 °C. Measurements of cadaver decomposition included cadaver mass loss, carbon dioxide-carbon (CO2-C) evolution, microbial biomass carbon (MBC), protease activity, phosphodiesterase activity, ninhydrin-reactive nitrogen (NRN) and soil pH. Cadaver burial resulted in a significant increase in CO2-C evolution, MBC, enzyme activities, NRN and soil pH. Cadaver decomposition in loamy sand and sandy soil was greater at lower matric potentials (wetter soil). However, optimal matric potential for cadaver decomposition in medium clay was exceeded, which resulted in a slower rate of cadaver decomposition in the wettest soil. Slower cadaver decomposition was also observed at high matric potential (-0.3 MPa). Furthermore, wet sandy soil was associated with greater cadaver decomposition than wet fine-textured soil. We conclude that gravesoil moisture content can modify the relationship between temperature and cadaver decomposition and that soil microorganisms can play a significant role in cadaver breakdown. We also conclude that soil NRN is a more reliable indicator of gravesoil than soil pH.
Abstract:
In this article, along with others, we take the position that the Null-Subject Parameter (NSP) (Chomsky 1981; Rizzi 1982) cluster of properties is narrower in scope than some originally contended. We test for the resetting of the NSP by English L2 learners of Spanish at the intermediate level, including poverty-of-the-stimulus knowledge of the Overt Pronoun Constraint (Montalbetti 1984). Our participants are tested before and after five months' residency in Spain in an effort to see whether increased amounts of native exposure are particularly beneficial for parameter resetting. Although we demonstrate NSP resetting for some of the L2 learners, our data essentially demonstrate that even with the advent of time/exposure to native input, there is no immediate gainful effect for NSP resetting.
Abstract:
In this paper, we study jumps in commodity prices. Contrary to the assumptions of existing models of commodity price dynamics, a simple analysis of the data reveals that the probability of tail events is not constant but depends on the time of the year, i.e. it exhibits seasonality. We propose a stochastic volatility jump-diffusion model to capture this seasonal variation. Applying the Markov Chain Monte Carlo (MCMC) methodology, we estimate our model using 20 years of futures data from four different commodity markets. We find strong statistical evidence to suggest that our model with seasonal jump intensity outperforms models featuring a constant jump intensity. To demonstrate the practical relevance of our findings, we show that our model typically improves Value-at-Risk (VaR) forecasts.
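A generic form of such a model, for orientation only; the paper's exact specification may differ:

```latex
% A generic stochastic volatility jump-diffusion:
\frac{dS_t}{S_t} = \mu\,dt + \sqrt{V_t}\,dW_t^{(1)} + \left(e^{J_t}-1\right) dN_t
dV_t = \kappa\,(\theta - V_t)\,dt + \sigma_V \sqrt{V_t}\,dW_t^{(2)}
% Seasonality enters through a periodic, nonnegative jump intensity for N_t, e.g.
\lambda(t) = \lambda_0 + \lambda_1 \cos\!\left( 2\pi\,(t - t_0) \right)
```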
Abstract:
We present a novel algorithm for concurrent model state and parameter estimation in nonlinear dynamical systems. The new scheme uses ideas from three-dimensional variational data assimilation (3D-Var) and the extended Kalman filter (EKF), together with the technique of state augmentation, to estimate uncertain model parameters alongside the model state variables in a sequential filtering system. The method is relatively simple to implement and computationally inexpensive to run for large systems with relatively few parameters. We demonstrate the efficacy of the method via a series of identical twin experiments with three simple dynamical system models. The scheme is able to recover the parameter values to a good level of accuracy, even when observational data are noisy. We expect this new technique to be easily transferable to much larger models.
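A minimal sketch of the state-augmentation idea with a plain EKF on a toy scalar model (not one of the paper's three test systems; all values are illustrative):

```python
import numpy as np

# State augmentation: jointly estimate the state x and the unknown
# parameter a of x[k+1] = a * x[k] + w by filtering z = [x, a].
rng = np.random.default_rng(0)
a_true, x, ys = 0.95, 1.0, []
for _ in range(200):                       # simulate noisy observations
    x = a_true * x + 0.1 * rng.standard_normal()
    ys.append(x + 0.05 * rng.standard_normal())

z = np.array([0.5, 0.5])                   # initial guess for [x, a]
P = np.eye(2)
Q = np.diag([0.1 ** 2, 1e-6])              # tiny process noise on the parameter
R = 0.05 ** 2
H = np.array([[1.0, 0.0]])                 # we observe x only
for y in ys:
    # Forecast with the Jacobian of f(z) = [a * x, a]
    F = np.array([[z[1], z[0]], [0.0, 1.0]])
    z = np.array([z[1] * z[0], z[1]])
    P = F @ P @ F.T + Q
    # EKF update
    S = H @ P @ H.T + R
    K = P @ H.T / S
    z = z + (K * (y - z[0])).ravel()
    P = (np.eye(2) - K @ H) @ P
print(z)  # the parameter estimate z[1] should approach a_true = 0.95
```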
Abstract:
Models for which the likelihood function can be evaluated only up to a parameter-dependent unknown normalizing constant, such as Markov random field models, are used widely in computer science, statistical physics, spatial statistics, and network analysis. However, Bayesian analysis of these models using standard Monte Carlo methods is not possible due to the intractability of their likelihood functions. Several methods that permit exact, or close to exact, simulation from the posterior distribution have recently been developed. However, estimating the evidence and Bayes factors (BFs) for these models remains challenging in general. This paper describes new random-weight importance sampling and sequential Monte Carlo methods for estimating BFs that use simulation to circumvent the evaluation of the intractable likelihood, and compares them to existing methods. In some cases we observe an advantage in the use of biased weight estimates. An initial investigation into the theoretical and empirical properties of this class of methods is presented. Some support for the use of biased estimates is found, but we advocate caution in the use of such estimates.
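The random-weight principle can be shown in a few lines: replacing each importance weight by a noisy but unbiased estimate leaves the evidence estimator unbiased. A toy Gaussian sketch, not the paper's SMC scheme:

```python
import numpy as np

# Random-weight importance sampling for the model evidence
#   Z = integral of p(y | th) p(th) dth,
# where each weight is replaced by a noisy but unbiased estimate, as when
# the likelihood itself can only be estimated. Toy setup: y | th ~ N(th, 1).
rng = np.random.default_rng(0)
y = 1.3                                    # one observation

def lik(th):
    return np.exp(-0.5 * (y - th) ** 2) / np.sqrt(2 * np.pi)

n = 100_000
th = rng.standard_normal(n)                # proposal = prior N(0, 1)
w_exact = lik(th)                          # exact importance weights
# Unbiased noisy weights: multiply by positive noise with mean 1.
w_noisy = w_exact * rng.gamma(shape=5.0, scale=1 / 5.0, size=n)

z_true = np.exp(-0.25 * y ** 2) / np.sqrt(4 * np.pi)  # marginal N(0, 2) at y
print(z_true, w_exact.mean(), w_noisy.mean())  # all three should agree closely
```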
Abstract:
The co-polar correlation coefficient (ρhv) has many applications, including hydrometeor classification, ground clutter and melting layer identification, interpretation of ice microphysics, and the retrieval of rain drop size distributions (DSDs). However, we currently lack the quantitative error estimates that are necessary if these applications are to be fully exploited. Previous error estimates of ρhv rely on knowledge of the unknown "true" ρhv and implicitly assume a Gaussian probability distribution function of ρhv samples. We show that frequency distributions of ρhv estimates are in fact highly negatively skewed. A new variable, L = -log10(1 - ρhv), is defined, which does have Gaussian error statistics and a standard deviation that depends only on the number of independent radar pulses. This is verified using observations of spherical drizzle drops, allowing, for the first time, the construction of rigorous confidence intervals in estimates of ρhv. In addition, we demonstrate how the imperfect co-location of the horizontal and vertical polarisation sample volumes may be accounted for. The possibility of using L to estimate the dispersion parameter (µ) in the gamma drop size distribution is investigated. We find that including drop oscillations is essential for this application; otherwise there could be biases in retrieved µ of up to ~8. Preliminary results in rainfall are presented. In a convective rain case study, our estimates show µ to be substantially larger than 0 (an exponential DSD). In this particular rain event, rain rate would be overestimated by up to 50% if a simple exponential DSD were assumed.
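A small sketch of how the transform can be used in practice; sigma_L below is a placeholder, whereas the paper derives it from the number of independent pulses:

```python
import numpy as np

def rho_to_L(rho_hv):
    """Transform the co-polar correlation rho_hv to L = -log10(1 - rho_hv)."""
    return -np.log10(1.0 - rho_hv)

def L_to_rho(L):
    return 1.0 - 10.0 ** (-L)

# Hypothetical estimate and a placeholder standard deviation for L.
L_hat, sigma_L = rho_to_L(0.995), 0.05
lo, hi = L_hat - 1.96 * sigma_L, L_hat + 1.96 * sigma_L
print(L_to_rho(lo), L_to_rho(hi))  # 95% CI mapped back to rho_hv
# Note the asymmetry of the interval in rho_hv space: Gaussian errors in L
# become negatively skewed errors in rho_hv, matching the observed skew.
```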
Abstract:
We analyze the risk premia embedded in the S&P 500 spot index and option markets. We use a long time series of spot prices and a large panel of option prices to jointly estimate the diffusive stock risk premium, the price jump risk premium, the diffusive variance risk premium, and the variance jump risk premium. The risk premia are statistically and economically significant and move over time. Investigating the economic drivers of the risk premia, we are able to explain up to 63% of these variations.
Abstract:
The primary objective of this research study is to determine which form of testing, the PEST algorithm or an operator-controlled condition, is more accurate and time-efficient for administration of the gaze stabilization test.
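For context, PEST (Parameter Estimation by Sequential Testing) is an adaptive staircase procedure. A deliberately simplified sketch of the adaptive idea, not the full PEST rule set, with all values invented:

```python
import numpy as np

rng = np.random.default_rng(0)
threshold = 120.0  # hypothetical "true" threshold (e.g. head velocity, deg/s)

def responds_correctly(level):
    # Toy psychometric model: performance degrades as the level rises.
    p = 1.0 / (1.0 + np.exp((level - threshold) / 10.0))
    return rng.uniform() < p

level, step, prev_dir = 60.0, 32.0, None
for _ in range(40):
    direction = 1 if responds_correctly(level) else -1  # harder after a correct trial
    if prev_dir is not None and direction != prev_dir:
        step /= 2.0                                     # reversal: halve the step
    level += direction * step
    prev_dir = direction
    if step < 1.0:                                      # convergence criterion
        break
print(level)  # converges near the 50%-correct level, here ~120
```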