953 results for Time series. Transfer function. Recursive Estimation. Plunger lift. Gas flow.
Abstract:
We conducted an in-situ X-ray micro-computed tomography heating experiment at the Advanced Photon Source (USA) to dehydrate an unconfined 2.3 mm diameter cylinder of Volterra Gypsum. We used a purpose-built X-ray transparent furnace to heat the sample to 388 K for a total of 310 min to acquire a three-dimensional time-series tomography dataset comprising nine time steps. The voxel size of 2.2 μm³ proved sufficient to pinpoint reaction initiation and the organization of drainage architecture in space and time. We observed that dehydration commences across a narrow front, which propagates from the margins to the centre of the sample over more than four hours. The advance of this front can be fitted with a square-root function, implying that the initiation of the reaction in the sample can be described as a diffusion process. Novel parallelized computer codes allow quantifying the geometry of the porosity and the drainage architecture from the very large tomographic datasets (2048³ voxels) in unprecedented detail. We determined position, volume, shape and orientation of each resolvable pore and tracked these properties over the duration of the experiment. We found that the pore-size distribution follows a power law. Pores tend to be anisotropic but rarely crack-shaped and have a preferred orientation, likely controlled by a pre-existing fabric in the sample. With ongoing dehydration, pores coalesce into a single interconnected pore cluster that is connected to the surface of the sample cylinder and provides an effective drainage pathway. Our observations can be summarized in a model in which gypsum is stabilized by thermal expansion stresses and locally increased pore fluid pressures until the dehydration front approaches to within about 100 μm. Then, the internal stresses are released and dehydration happens efficiently, resulting in new pore space. Pressure release, the production of pores and the advance of the front are coupled in a feedback loop.
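The square-root fit of the front advance can be sketched as follows; this is a minimal illustration of the diffusion-style model x(t) = k·√t, using synthetic times and positions rather than the experimental measurements from the paper.

```python
import numpy as np

# Hypothetical illustration: fit the front advance x(t) = k * sqrt(t),
# consistent with a diffusion-controlled process. Data are synthetic,
# not the measurements from the tomography experiment.
t = np.array([30.0, 60.0, 120.0, 180.0, 240.0, 310.0])  # minutes
k_true = 60.0                                            # μm per sqrt(min)
x = k_true * np.sqrt(t)                                  # front position, μm

# Closed-form least-squares estimate of k for the model x = k * sqrt(t)
s = np.sqrt(t)
k_hat = float(np.dot(s, x) / np.dot(s, s))
```

A good straight-line fit of x against √t (equivalently, slope 0.5 on a log-log plot) is what supports the diffusion interpretation.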
Contrast transfer function correction applied to cryo-electron tomography and sub-tomogram averaging
Abstract:
Cryo-electron tomography together with averaging of sub-tomograms containing identical particles can reveal the structure of proteins or protein complexes in their native environment. The resolution of this technique is limited by the contrast transfer function (CTF) of the microscope. The CTF is not routinely corrected in cryo-electron tomography because of difficulties including CTF detection, due to the low signal-to-noise ratio, and CTF correction, since images are characterised by a spatially variant CTF. Here we simulate the effects of the CTF on the resolution of the final reconstruction, before and after CTF correction, and consider the effect of errors and approximations in defocus determination. We show that errors in defocus determination are well tolerated when correcting a series of tomograms collected at a range of defocus values. We apply methods for determining the CTF parameters in low signal-to-noise images of tilted specimens, for monitoring defocus changes using observed magnification changes, and for correcting the CTF prior to reconstruction. Using bacteriophage PRD1 as a test sample, we demonstrate that this approach gives an improvement in the structure obtained by sub-tomogram averaging from cryo-electron tomograms.
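For readers unfamiliar with the CTF, a minimal one-dimensional sketch of the phase-contrast CTF and the simplest correction, phase flipping (multiplying each Fourier component by the sign of the CTF), is given below. The optical parameters are generic textbook-style values, not those used in the paper.

```python
import math

# Illustrative 1-D phase-contrast CTF: CTF(k) = -sin(chi(k)), with
# chi(k) = pi * lambda * defocus * k^2 - (pi/2) * Cs * lambda^3 * k^4.
# Parameter values are generic examples, not the paper's settings.
wavelength_nm = 0.00197      # electron wavelength at ~300 kV
defocus_nm = 3000.0          # underfocus
cs_nm = 2.0e6                # spherical aberration (2 mm)

def ctf(k):
    """CTF at spatial frequency k (1/nm)."""
    chi = (math.pi * wavelength_nm * defocus_nm * k ** 2
           - 0.5 * math.pi * cs_nm * wavelength_nm ** 3 * k ** 4)
    return -math.sin(chi)

def phase_flip(component, k):
    # Phase flipping restores the sign of contrast where the CTF is
    # negative; amplitude attenuation is left uncorrected.
    return component if ctf(k) >= 0 else -component
```

Because the CTF oscillates with k and depends on defocus, a tilted specimen has a defocus gradient across the image, which is why the abstract calls the CTF spatially variant.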
Abstract:
The multifractal properties of daily rainfall time series at stations in the Pearl River basin of China over periods of up to 45 years are examined using the universal multifractal approach based on the multiplicative cascade model and multifractal detrended fluctuation analysis (MF-DFA). The results from these two kinds of multifractal analyses show that the daily rainfall time series in this basin have multifractal behavior in two different time scale ranges. It is found that the empirical multifractal moment function K(q) of the daily rainfall time series can be fitted very well by the universal multifractal model (UMM). The estimated values of the conservation parameter H from UMM for these daily rainfall data are close to zero, indicating that they correspond to conserved fields. After removing the seasonal trend in the rainfall data, the estimated values of the exponent h(2) from MF-DFA indicate that the daily rainfall time series in the Pearl River basin exhibit no long-term correlations. It is also found that K(2) and elevation are negatively correlated, suggesting a relationship between topography and rainfall variability.
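As a hedged illustration of the exponent h(2) mentioned above: the q = 2 case of MF-DFA reduces to standard detrended fluctuation analysis. The sketch below (not the authors' code) estimates h(2) for white noise, which should come out near 0.5, the value signalling no long-term correlations.

```python
import numpy as np

# Minimal DFA sketch: the q = 2 case of MF-DFA. White noise should
# give h(2) close to 0.5 (no long-term correlations).
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)            # stand-in for a rainfall series

def dfa_h2(x, scales):
    y = np.cumsum(x - x.mean())          # profile (integrated series)
    F = []
    for s in scales:
        n = len(y) // s
        segs = y[:n * s].reshape(n, s)
        t = np.arange(s)
        f2 = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)  # linear detrending per window
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(f2)))
    # h(2) is the log-log slope of the fluctuation function F(s) vs s
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

scales = np.array([16, 32, 64, 128, 256])
h2 = dfa_h2(x, scales)
```

For a real rainfall series the seasonal cycle must be removed first, as the abstract notes, or the periodic trend will inflate the apparent correlations.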
Abstract:
This correspondence considers the problem of optimally controlling the thrust steering angle of an ion-propelled spaceship so as to effect a minimum-time coplanar orbit transfer from the mean orbital distance of Earth to the mean Martian and Venusian orbital distances. This problem has been modelled as a free-terminal-time optimal control problem with unbounded control variable and with state-variable equality constraints at the final time. The problem has been solved by the penalty function approach, using the conjugate gradient algorithm. In general, the optimal solution shows a significant departure from earlier work. In particular, the optimal control in the case of the Earth-Mars orbit transfer, during the initial phase of the spaceship's flight, is found to be negative, resulting in the motion of the spaceship within the Earth's orbit for a significant fraction of the total optimized orbit transfer time. This feature of the optimal solution has not been reported by earlier investigators of this problem.
Abstract:
In this paper we propose a novel family of kernels for multivariate time-series classification problems. Each time-series is approximated by a linear combination of piecewise polynomial functions in a Reproducing Kernel Hilbert Space by a novel kernel interpolation technique. Using the associated kernel function a large margin classification formulation is proposed which can discriminate between two classes. The formulation leads to kernels, between two multivariate time-series, which can be efficiently computed. The kernels have been successfully applied to writer independent handwritten character recognition.
Abstract:
Background: Depression and anxiety have been linked to serious cardiovascular events in patients with preexisting cardiac illness. A decrease in cardiac vagal function, as suggested by a decrease in heart rate (HR) variability, has been linked to sudden death. Methods: We compared the largest Lyapunov exponent (LLE) and nonlinearity scores of the unfiltered (UF) and filtered time series (very low, low, and high frequency; VLF, LF and HF) of HR between patients with depression (n = 14) and healthy control subjects (n = 18). Results: We found significantly lower LLE of the unfiltered series in either posture, and of the HF series in supine posture, in patients with major depression (p < .002). LLE (LF/UF), which may indicate relative sympathetic activity, was also significantly higher in supine and standing postures in patients (p < .05); LF/HF (LLE) was also higher in patients (p < .05) in either posture. Conclusions: These findings suggest that major depression is associated with decreased cardiac vagal function and a relative increase in sympathetic function, which may be related to the higher risk of cardiovascular mortality in this group, and illustrate the usefulness of nonlinear measures of chaos such as LLE in addition to the commonly used spectral measures.
Abstract:
Tricyclic antidepressants have notable cardiac side effects, and this issue has become important due to recent reports of increased cardiovascular mortality in patients with depression and anxiety. Several previous studies indicate that serotonin reuptake inhibitors (SRIs) do not appear to have such adverse effects. Apart from the effects of these drugs on the routine 12-lead ECG, the effects on beat-to-beat heart rate (HR) and QT interval time series provide more information on the side effects related to cardiac autonomic function. In this study, we evaluated the effects of two antidepressants, nortriptyline (n = 13), a tricyclic, and paroxetine (n = 16), an SRI, on HR variability in patients with panic disorder, using a measure of chaos, the largest Lyapunov exponent (LLE), computed from pre- and posttreatment HR time series. Our results show that nortriptyline is associated with a decrease in LLE of the high frequency (HF: 0.15-0.5 Hz) filtered series, which is most likely due to its anticholinergic effect, while paroxetine had no such effect. Paroxetine significantly decreased sympathovagal ratios as measured by a decrease in LLE of LF/HF. These results suggest that paroxetine is safer with regard to cardiovascular effects than nortriptyline in this group of patients. (C) 2003 Elsevier Inc. All rights reserved.
Abstract:
Models of river flow time series are essential for efficient management of a river basin. They help policy makers develop efficient water utilization strategies to maximize the utility of a scarce water resource. Time series analysis has been used extensively for modeling river flow data. The use of machine learning techniques such as support vector regression (SVR) and neural network models is gaining increasing popularity. In this paper we compare the performance of these techniques by applying them to long-term time-series data of the inflows into the Krishnaraja Sagar reservoir (KRS) from three tributaries of the river Cauvery. In this study, flow data over a period of 30 years from three different observation points established in the upper Cauvery river sub-basin are analyzed to estimate their contribution to KRS. Specifically, the ANN model uses a multi-layer feed-forward network trained with a back-propagation algorithm, and support vector regression with an epsilon-insensitive loss function is used. Auto-regressive moving average models are also applied to the same data. The performance of the different techniques is compared using metrics such as root mean squared error (RMSE), correlation, normalized root mean squared error (NRMSE) and Nash-Sutcliffe Efficiency (NSE).
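The comparison metrics named at the end of the abstract are standard; a minimal sketch follows, with made-up observed and simulated flow values for illustration.

```python
import numpy as np

# Standard goodness-of-fit metrics for hydrological models.
# The observed/simulated flows below are invented example values.
def rmse(obs, sim):
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def nrmse(obs, sim):
    # Normalized by the range of the observations.
    return rmse(obs, sim) / float(obs.max() - obs.min())

def nse(obs, sim):
    # Nash-Sutcliffe Efficiency: 1 is a perfect fit; 0 means the model
    # is no better than predicting the mean of the observations.
    return float(1 - np.sum((obs - sim) ** 2)
                   / np.sum((obs - obs.mean()) ** 2))

obs = np.array([10.0, 12.0, 15.0, 11.0, 9.0])
sim = np.array([10.5, 11.5, 14.0, 11.5, 9.5])
nse_val = nse(obs, sim)
```

NSE is the usual headline metric in streamflow studies because, unlike RMSE, it is dimensionless and directly comparable across catchments.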
Abstract:
We propose a Monte Carlo filter for recursive estimation of diffusive processes that modulate the instantaneous rates of Poisson measurements. A key aspect is the additive update, through a gain-like correction term, empirically approximated from the innovation integral in the time-discretized Kushner-Stratonovich equation. The additive filter-update scheme eliminates the problem of particle collapse encountered in many conventional particle filters. Through a few numerical demonstrations, the versatility of the proposed filter is brought forth.
Abstract:
We demonstrate a parameter extraction algorithm based on a theoretical transfer function, which takes into account a converging THz beam. Using this, we successfully extract material parameters from data obtained for a quartz sample with a THz time domain spectrometer. © 2010 IEEE.
Abstract:
We present a method to integrate environmental time series into stock assessment models and to test the significance of correlations between population processes and the environmental time series. Parameters that relate the environmental time series to population processes are included in the stock assessment model, and likelihood ratio tests are used to determine if the parameters improve the fit to the data significantly. Two approaches are considered to integrate the environmental relationship. In the environmental model, the population dynamics process (e.g. recruitment) is proportional to the environmental variable, whereas in the environmental model with process error it is proportional to the environmental variable, but the model allows an additional temporal variation (process error) constrained by a log-normal distribution. The methods are tested by using simulation analysis and compared to the traditional method of correlating model estimates with environmental variables outside the estimation procedure. In the traditional method, the estimates of recruitment were provided by a model that allowed the recruitment only to have a temporal variation constrained by a log-normal distribution. We illustrate the methods by applying them to test the statistical significance of the correlation between sea-surface temperature (SST) and recruitment to the snapper (Pagrus auratus) stock in the Hauraki Gulf–Bay of Plenty, New Zealand. Simulation analyses indicated that the integrated approach with additional process error is superior to the traditional method of correlating model estimates with environmental variables outside the estimation procedure. The results suggest that, for the snapper stock, recruitment is positively correlated with SST at the time of spawning.
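The likelihood ratio test that decides whether the environmental parameter improves the fit can be sketched as follows. The log-likelihood values are hypothetical (not estimates from the snapper assessment), and the chi-square(1) tail probability is computed via the complementary error function.

```python
import math

# Hypothetical sketch of the likelihood ratio test for one extra
# parameter linking an environmental series (e.g. SST) to recruitment.
ll_base = -123.4   # log-likelihood without the SST-recruitment parameter
ll_env = -119.1    # log-likelihood with the SST-recruitment parameter

# Twice the log-likelihood gain is asymptotically chi-square with
# df = 1 (one added parameter).
lrt = 2.0 * (ll_env - ll_base)

# chi-square(1) survival function: P(X > x) = erfc(sqrt(x / 2))
p_value = math.erfc(math.sqrt(lrt / 2.0))
significant = p_value < 0.05
```

A significant result here plays the same role as the abstract's test of whether including SST improves the fit to the data, while the simulation study addresses whether the test has the right error rates.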
Abstract:
Particle flux in the ocean reflects ongoing biological and geological processes operating under the influence of the local environment. Estimation of this particle flux through sediment trap deployment is constrained by sampler accuracy, particle preservation, and swimmer distortion. Interpretation of specific particle flux is further constrained by indeterminate particle dispersion and the absence of a clear understanding of the sedimentary consequences of ecosystem activity. Nevertheless, the continuous and integrative properties of the particle trap measure, along with the logistic advantage of a long-term moored sampler, provide a set of strategic advantages that appear analogous to those underlying conventional oceanographic survey programs. Emboldened by this perception, several stations along the coast of Southern California and Mexico have been targeted as coastal ocean flux sites (COFS).
Abstract:
In this paper we study parameter estimation for time series with asymmetric α-stable innovations. The proposed methods use a Poisson sum series representation (PSSR) for the asymmetric α-stable noise to express the process in a conditionally Gaussian framework. That allows us to implement Bayesian parameter estimation using Markov chain Monte Carlo (MCMC) methods. We further enhance the series representation by introducing a novel approximation of the series residual terms in which we are able to characterise the mean and variance of the approximation. Simulations illustrate the proposed framework applied to linear time series, estimating the model parameter values and model order P for an autoregressive (AR(P)) model driven by asymmetric α-stable innovations. © 2012 IEEE.
Abstract:
We introduce a dynamic directional model (DDM) for studying brain effective connectivity based on intracranial electrocorticographic (ECoG) time series. The DDM consists of two parts: a set of differential equations describing neuronal activity of brain components (state equations), and observation equations linking the underlying neuronal states to observed data. When applied to functional MRI or EEG data, DDMs usually have complex formulations and thus can accommodate only a few regions, due to limitations in spatial resolution and/or temporal resolution of these imaging modalities. In contrast, we formulate our model in the context of ECoG data. The combined high temporal and spatial resolution of ECoG data results in a much simpler DDM, allowing investigation of complex connections between many regions. To identify functionally segregated sub-networks, a form of biologically economical brain networks, we propose the Potts model for the DDM parameters. The neuronal states of brain components are represented by cubic spline bases and the parameters are estimated by minimizing a log-likelihood criterion that combines the state and observation equations. The Potts model is converted to the Potts penalty in the penalized regression approach to achieve sparsity in parameter estimation, for which a fast iterative algorithm is developed. The methods are applied to an auditory ECoG dataset.
Abstract:
Based on an algorithm for pattern matching in character strings, we implement a pattern matching machine that searches for occurrences of patterns in multidimensional time series. Before the search process takes place, time series are encoded in user-designed alphabets. The patterns, on the other hand, are formulated as regular expressions that are composed of letters from these alphabets and operators. Furthermore, we develop a genetic algorithm to breed patterns that maximize a user-defined fitness function. In an application to financial data, we show that patterns bred to predict high exchange rates volatility in training samples retain statistically significant predictive power in validation samples.
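The encode-then-match scheme can be sketched as follows; the alphabet, thresholds and pattern below are invented for illustration and are not those used in the paper.

```python
import re

# Hypothetical sketch: encode a time series into a user-designed
# alphabet, then search the encoded string with a regular expression.
series = [0.1, 0.9, 1.4, 1.6, 0.2, 1.1, 1.5, 0.3]

def encode(x):
    # 'l' = low, 'm' = medium, 'h' = high (illustrative thresholds)
    if x < 0.5:
        return "l"
    return "m" if x < 1.2 else "h"

word = "".join(encode(x) for x in series)

# Pattern: a low value followed by one or more medium/high values,
# i.e. an upswing after a trough.
pattern = re.compile(r"l[mh]+")
matches = [m.group() for m in pattern.finditer(word)]
```

In the paper's setting, a genetic algorithm searches over such regular expressions for ones whose matches predict a target quantity (there, exchange rate volatility) well on training data.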