71 results for Regression-based decomposition.


Relevance: 40.00%

Abstract:

This paper introduces a procedure for filtering electromyographic (EMG) signals. Its key element is the Empirical Mode Decomposition, a novel digital signal processing technique that can decompose any time series into a set of functions designated as intrinsic mode functions. The procedure for EMG signal filtering is compared to a related approach based on the wavelet transform. Results obtained from the analysis of synthetic and experimental EMG signals show that our method can be successfully and easily applied in practice to attenuate background activity in EMG signals. (c) 2006 Elsevier Ltd. All rights reserved.
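For readers unfamiliar with the technique, the core of EMD is the sifting loop, which repeatedly subtracts the mean of the upper and lower extremal envelopes until an intrinsic mode function (IMF) remains. The following is a minimal Python sketch of that idea, not the authors' implementation; the stopping tolerance, iteration cap and cubic-spline envelopes are illustrative assumptions.

```python
# Minimal EMD sifting sketch (illustrative; not the paper's implementation).
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift(x, t, max_iter=50, tol=0.05):
    """Extract one intrinsic mode function (IMF) from signal x sampled at t."""
    h = x.copy()
    for _ in range(max_iter):
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        if len(maxima) < 2 or len(minima) < 2:
            break  # too few extrema to build envelopes
        upper = CubicSpline(t[maxima], h[maxima])(t)  # upper envelope
        lower = CubicSpline(t[minima], h[minima])(t)  # lower envelope
        mean = 0.5 * (upper + lower)
        if np.sum(mean**2) / np.sum(h**2) < tol:  # assumed sifting stop rule
            h = h - mean
            break
        h = h - mean
    return h

def emd(x, t, n_imfs=6):
    """Decompose x into IMFs plus a residual; filtering then amounts to
    reconstructing the signal from a chosen subset of IMFs."""
    imfs, residual = [], x.copy()
    for _ in range(n_imfs):
        imf = sift(residual, t)
        imfs.append(imf)
        residual = residual - imf
        if len(argrelextrema(residual, np.greater)[0]) < 2:
            break  # residual is (near-)monotonic
    return imfs, residual
```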

Relevance: 40.00%

Abstract:

In this letter, a Box-Cox transformation-based radial basis function (RBF) neural network is introduced, in which the RBF neural network represents the transformed system output. Initially, a fixed and moderate-sized RBF model base is derived based on a rank-revealing orthogonal matrix triangularization (QR decomposition). Then a new fast identification algorithm is introduced that uses the Gauss-Newton algorithm to derive the required Box-Cox transformation, based on a maximum likelihood estimator. The main contribution of this letter is to explore the special structure of the proposed RBF neural network for computational efficiency by utilizing the block matrix inversion lemma. Finally, the Box-Cox transformation-based RBF neural network, with good generalization and sparsity, is identified based on the derived optimal Box-Cox transformation and a D-optimality-based orthogonal forward regression algorithm. The proposed algorithm and its efficacy are demonstrated with an illustrative example in comparison with support vector machine regression.
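As a rough illustration of the overall idea (not the letter's algorithm), the sketch below fits an RBF model to a Box-Cox-transformed output and selects the transformation parameter by profile maximum likelihood over a grid; the grid search stands in for the Gauss-Newton step, and the centre and width choices are assumptions for the example.

```python
# Illustrative sketch: RBF model fitted to a Box-Cox-transformed output.
import numpy as np

def box_cox(y, lam):
    """Box-Cox transform; assumes y > 0."""
    return np.log(y) if abs(lam) < 1e-8 else (y**lam - 1.0) / lam

def rbf_design(X, centres, width):
    """Gaussian RBF design matrix for inputs X and given centres."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width**2))

def fit_boxcox_rbf(X, y, centres, width, lams=np.linspace(0.1, 2.0, 20)):
    P = rbf_design(X, centres, width)
    best = None
    for lam in lams:
        z = box_cox(y, lam)
        theta, *_ = np.linalg.lstsq(P, z, rcond=None)
        resid = z - P @ theta
        n = len(y)
        # profile log-likelihood includes the Jacobian of the transformation
        ll = -0.5 * n * np.log(resid @ resid / n) + (lam - 1.0) * np.log(y).sum()
        if best is None or ll > best[0]:
            best = (ll, lam, theta)
    return best  # (log-likelihood, lambda, RBF weights)
```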

Relevance: 40.00%

Abstract:

This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by the derivation of an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy levels. This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, for which it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse-of-dimensionality problem. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
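For context, the conventional Gram-Schmidt orthogonal decomposition that the paper extends can be sketched as follows: regressors are orthogonalised in turn and each is scored by its error reduction ratio (ERR), the share of output energy it explains. This is only the classical baseline, not the extended subspace algorithm of the paper.

```python
# Classical Gram-Schmidt orthogonal least squares with ERR scoring (sketch).
import numpy as np

def gram_schmidt_err(P, y):
    """Orthogonalise columns of P; return orthogonal regressors and their ERRs."""
    n, m = P.shape
    W = np.zeros_like(P, dtype=float)
    err = np.zeros(m)
    for k in range(m):
        w = P[:, k].astype(float)
        for j in range(k):  # remove projections onto earlier regressors
            w -= (W[:, j] @ P[:, k]) / (W[:, j] @ W[:, j]) * W[:, j]
        W[:, k] = w
        g = (w @ y) / (w @ w)                 # orthogonal coefficient
        err[k] = g**2 * (w @ w) / (y @ y)     # share of output energy explained
    return W, err
```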

Relevance: 40.00%

Abstract:

Transient neural assemblies mediated by synchrony in particular frequency ranges are thought to underlie cognition. We propose a new approach to their detection, using empirical mode decomposition (EMD), a data-driven approach that removes the need for arbitrary bandpass filter cut-offs. Phase locking is sought between modes. We explore the features of EMD, including making a quantitative assessment of its ability to preserve the phase content of signals, and proceed to develop a statistical framework with which to assess synchrony episodes. Furthermore, we propose a new approach to ensure signal decomposition using EMD. We adapt the Hilbert spectrum to a time-frequency representation of phase locking and are able to locate synchrony successfully in time and frequency between synthetic signals reminiscent of EEG. We compare our approach, which we call EMD phase locking analysis (EMDPL), with existing methods and show it to offer improved time-frequency localisation of synchrony.
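Once modes have been extracted, phase locking between two of them can be quantified via the Hilbert transform. The sketch below computes a sliding-window phase-locking value (PLV); the window length and hop are assumptions, and the paper's statistical significance framework is not reproduced.

```python
# Sliding-window phase-locking value between two modes (illustrative sketch).
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(a, b, win=256):
    """PLV between two IMFs over sliding windows; values near 1 indicate
    sustained phase locking."""
    dphi = np.angle(hilbert(a)) - np.angle(hilbert(b))  # instantaneous phase difference
    return np.array([
        np.abs(np.mean(np.exp(1j * dphi[i:i + win])))
        for i in range(0, len(dphi) - win, win // 2)
    ])
```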

Relevance: 40.00%

Abstract:

A new parameter-estimation algorithm, which minimises the cross-validated prediction error for linear-in-the-parameter models, is proposed, based on stacked regression and an evolutionary algorithm. It is initially shown, using a criterion called the mean dispersion error (MDE), that cross-validation is very important for prediction in linear-in-the-parameter models. Stacked regression, which can be regarded as a sophisticated type of cross-validation, is then introduced based on an evolutionary algorithm to produce a new parameter-estimation algorithm that preserves the parsimony of a concise model structure determined using the forward orthogonal least-squares (OLS) algorithm. The PRESS prediction errors are used for cross-validation, and the sunspot and Canadian lynx time series are used to demonstrate the new algorithm.
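The PRESS errors mentioned above admit a closed form for linear-in-the-parameters models: the leave-one-out residual is the ordinary residual inflated by one minus the leverage, so no refitting is needed. A minimal sketch:

```python
# PRESS (leave-one-out) residuals from the hat matrix, without refitting.
import numpy as np

def press_errors(P, y):
    """PRESS residuals e_i / (1 - h_ii) for the model y ≈ P @ theta."""
    theta, *_ = np.linalg.lstsq(P, y, rcond=None)
    resid = y - P @ theta
    # h_ii: leverage of each observation under the least-squares fit
    h_diag = np.einsum('ij,jk,ik->i', P, np.linalg.pinv(P.T @ P), P)
    return resid / (1.0 - h_diag)
```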

Relevance: 40.00%

Abstract:

The modelling of nonlinear stochastic dynamical processes from data involves solving the problems of data gathering, preprocessing, model architecture selection, learning or adaptation, parametric evaluation and model validation. For a given model architecture, such as associative memory networks, a common problem in nonlinear modelling is the "curse of dimensionality". A series of complementary data-based constructive identification schemes, mainly based on, but not limited to, operating-point-dependent fuzzy models, are introduced in this paper with the aim of overcoming the curse of dimensionality. These include (i) a mixture-of-experts algorithm based on a forward constrained regression algorithm; (ii) an inherently parsimonious Delaunay input-space partition based piecewise local linear modelling concept; (iii) a neurofuzzy model constructive approach based on forward orthogonal least squares and optimal experimental design; and finally (iv) a neurofuzzy model construction algorithm based on Bézier-Bernstein polynomial basis functions and additive decomposition. Illustrative examples demonstrate their applicability, showing that the final major hurdle in data-based modelling has almost been removed.
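As a small concrete illustration of approach (iv), the Bernstein polynomial basis of a given degree can be constructed as below; the degree and the scaling of inputs to [0, 1] are assumptions for the example. In the additive decomposition, the model output is a sum of such univariate expansions, one per input, which avoids the exponential growth of a full tensor-product basis.

```python
# Bernstein polynomial basis for inputs scaled to [0, 1] (illustrative).
import numpy as np
from scipy.special import comb

def bernstein_basis(x, degree):
    """Return the (degree + 1) Bernstein basis functions evaluated at x in [0, 1]."""
    k = np.arange(degree + 1)
    return comb(degree, k) * x[:, None]**k * (1.0 - x[:, None])**(degree - k)
```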

Relevance: 40.00%

Abstract:

Various methods of assessment have been applied to the One-Dimensional Time to Explosion (ODTX) apparatus and experiments with the aim of allowing an estimate of the comparative violence of the explosion event to be made. Non-mechanical methods used were a simple visual inspection, measuring the increase in the void volume of the anvils following an explosion, and measuring the velocity of the sound produced by the explosion over 1 metre. Mechanical methods used included monitoring piezoelectric devices inserted in the frame of the machine and measuring the rotational velocity of a rotating bar placed on top of the anvils after it had been displaced by the shock wave. This last method, which resembles the original Hopkinson bar experiments, seemed the easiest to apply and analyse, giving relative rankings of violence and the possibility of calculating a “detonation” pressure.

Relevance: 40.00%

Abstract:

A One-Dimensional Time to Explosion (ODTX) apparatus has been used to study the times to explosion of a number of compositions based on RDX and HMX over a range of contact temperatures. The times to explosion at any given temperature tend to increase from RDX to HMX and with the proportion of HMX in the composition. Thermal ignition theory has been applied to the time-to-explosion data to calculate kinetic parameters. The apparent activation energy for all of the compositions lay between 127 kJ mol−1 and 146 kJ mol−1. There were, however, large differences in the pre-exponential factor, and it was this factor, rather than the activation energy, that controlled the time to explosion.
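The kinetic analysis described here rests on thermal ignition theory, in which the time to explosion varies roughly as t ∝ A⁻¹ exp(Eₐ/RT), so a straight-line fit of ln t against 1/T yields the apparent activation energy and pre-exponential factor. A minimal sketch with made-up numbers (not the paper's data):

```python
# Arrhenius-style fit of time-to-explosion data (illustrative values only).
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def fit_arrhenius(T_kelvin, t_explosion):
    """Fit ln(t) = ln(C) + E_a/(R*T); return E_a (J/mol) and prefactor C."""
    slope, intercept = np.polyfit(1.0 / T_kelvin, np.log(t_explosion), 1)
    return slope * R, np.exp(intercept)

# Made-up contact temperatures (K) and times to explosion (s):
T = np.array([480.0, 500.0, 520.0, 540.0])
t = np.array([1700.0, 520.0, 170.0, 60.0])
E_a, C = fit_arrhenius(T, t)
print(f"apparent activation energy ≈ {E_a / 1000:.0f} kJ/mol")
```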

Relevance: 40.00%

Abstract:

An efficient two-level model identification method, aiming at maximising a model's generalisation capability, is proposed for a large class of linear-in-the-parameters models from observational data. A new elastic net orthogonal forward regression (ENOFR) algorithm is employed at the lower level to carry out simultaneous model selection and elastic net parameter estimation. The two regularisation parameters in the elastic net are optimised using a particle swarm optimisation (PSO) algorithm at the upper level by minimising the leave-one-out (LOO) mean square error (LOOMSE). There are two original contributions. Firstly, an elastic net cost function is defined and applied based on orthogonal decomposition, which facilitates the automatic model structure selection process without the need for a predetermined error tolerance to terminate the forward selection process. Secondly, it is shown that the LOOMSE based on the resultant ENOFR models can be computed analytically without actually splitting the data set, and that the associated computational cost is small owing to the ENOFR procedure. Consequently, a fully automated procedure is achieved without resort to any other validation data set for iterative model evaluation. Illustrative examples are included to demonstrate the effectiveness of the new approaches.
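To make the upper level concrete, a minimal particle swarm optimisation loop over the two regularisation parameters might look like the sketch below. The objective is any callable returning the LOOMSE of the lower-level model (the name loomse_of_enofr_model in the usage comment is hypothetical); the swarm constants are conventional defaults, not the paper's settings.

```python
# Minimal PSO sketch for tuning two regularisation parameters (illustrative).
import numpy as np

def pso(objective, bounds, n_particles=20, n_iters=50, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))  # particle positions
    v = np.zeros_like(x)                                   # particle velocities
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f                 # update personal bests
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# e.g. pso(loomse_of_enofr_model, bounds=[(1e-6, 1.0), (1e-6, 1.0)])
```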

Relevance: 40.00%

Abstract:

This paper discusses ECG classification after parametrizing the ECG waveforms in the wavelet domain. The aim of the work is to develop an accurate classification algorithm that can be used to diagnose cardiac beat abnormalities detected using a mobile platform such as smartphones. Continuous-time recurrent neural network classifiers are considered for this task. Records from the European ST-T Database are decomposed in the wavelet domain using discrete wavelet transform (DWT) filter banks, and the resulting DWT coefficients are filtered and used as inputs for training the neural network classifier. Advantages of the proposed methodology are the reduced memory requirement for the signals, which is of relevance to mobile applications, as well as an improvement in the generalization ability of the neural network due to the more parsimonious representation of the signal at its inputs.
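A minimal sketch of the wavelet parametrization step, using the PyWavelets package as a stand-in implementation; the wavelet family, decomposition level and any coefficient filtering are illustrative choices rather than the paper's exact settings.

```python
# DWT parametrization of an ECG beat (illustrative wavelet and level).
import numpy as np
import pywt

def dwt_features(beat, wavelet="db4", level=4):
    """Return concatenated DWT coefficients of one ECG beat as a compact
    input vector for the classifier."""
    coeffs = pywt.wavedec(beat, wavelet, level=level)  # [cA_n, cD_n, ..., cD1]
    return np.concatenate(coeffs)
```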

Relevance: 40.00%

Abstract:

The study of decaying organisms and death assemblages is referred to as forensic taphonomy, or more simply the study of graves. The field is dominated by entomology, anthropology and archaeology, but forensic taphonomy also includes the study of the ecology and chemistry of the burial environment. Studies in forensic taphonomy often require the use of analogues for human cadavers or their component parts. These might include animal cadavers or skeletal muscle tissue. However, maintaining a sufficient supply of cadavers or analogues may require periodic freezing of test material prior to experimental inhumation in the soil. This study was carried out to ascertain the effect of freezing skeletal muscle tissue prior to inhumation and decomposition in a soil environment under controlled laboratory conditions. Changes in soil chemistry were also measured. To test the impact of freezing, skeletal muscle tissue (Sus scrofa) was either frozen (−20 °C) or refrigerated (4 °C). Portions of skeletal muscle tissue (∼1.5 g) were interred in microcosms (72 mm diameter × 120 mm height) containing sieved (2 mm) soil (sand) adjusted to 50% water-holding capacity. The experiment had three treatments: a control with no skeletal muscle tissue, microcosms containing frozen skeletal muscle tissue, and microcosms containing refrigerated tissue. The microcosms were destructively harvested 2, 4, 6, 8, 12, 16, 23, 30 and 37 days after interment of the skeletal muscle tissue, with six replicates per treatment at each harvest. Microbial activity (carbon dioxide respiration) was monitored throughout the experiment. At harvest the skeletal muscle tissue was removed and the detritosphere soil was sampled for chemical analysis. In the current study, freezing was found to have no significant impact on decomposition or soil chemistry compared to unfrozen samples. However, the interment of skeletal muscle tissue had a significant impact on the microbial activity (carbon dioxide respiration) and chemistry of the surrounding soil, including pH, electrical conductivity, ammonium, nitrate, phosphate and potassium. This is the first controlled laboratory study to measure changes in soil inorganic chemistry associated with the decomposition of skeletal muscle tissue in combination with microbial activity.

Relevance: 30.00%

Abstract:

In this paper, Bayesian decision procedures are developed for dose-escalation studies based on bivariate observations of undesirable events and signs of therapeutic benefit. The methods generalize earlier approaches that take into account only the undesirable outcomes. Logistic regression models are used to model the two responses, which are both assumed to take a binary form. A prior distribution for the unknown model parameters is suggested, and an optional safety constraint can be included. Gain functions to be maximized are formulated in terms of accurate estimation of the limits of a therapeutic window or optimal treatment of the next cohort of subjects, although the approach could be applied to achieve any of a wide variety of objectives. The designs introduced are illustrated through simulation and retrospective implementation to a completed dose-escalation study. Copyright © 2006 John Wiley & Sons, Ltd.
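As a highly simplified illustration of the Bayesian updating behind such designs, the sketch below evaluates the posterior of a two-parameter logistic model for the probability of an undesirable event on a grid. The prior, grid and dose scale are assumptions, and the paper's bivariate responses, gain functions and safety constraints are not reproduced.

```python
# Grid-based posterior for a two-parameter logistic dose-toxicity model.
import numpy as np

def logistic(d, a, b):
    """P(undesirable event | dose d) under parameters (a, b)."""
    return 1.0 / (1.0 + np.exp(-(a + b * np.log(d))))

def grid_posterior(doses, events, a_grid, b_grid, prior):
    """Posterior over (a, b) after observing binary events at given doses."""
    post = prior.copy()
    for d, y in zip(doses, events):
        p = logistic(d, a_grid[:, None], b_grid[None, :])
        post *= p if y else (1.0 - p)  # Bernoulli likelihood update
    return post / post.sum()

a_grid = np.linspace(-6, 2, 81)
b_grid = np.linspace(0.1, 3, 60)
prior = np.ones((81, 60)) / (81 * 60)  # flat prior, for illustration only
post = grid_posterior([1.0, 2.0, 2.0], [0, 0, 1], a_grid, b_grid, prior)
```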

Relevance: 30.00%

Abstract:

Recently, various approaches have been suggested for dose escalation studies based on observations of both undesirable events and evidence of therapeutic benefit. This article concerns a Bayesian approach to dose escalation that requires the user to make numerous design decisions relating to the number of doses to make available, the choice of the prior distribution, the imposition of safety constraints and stopping rules, and the criteria by which the design is to be optimized. Results are presented of a substantial simulation study conducted to investigate the influence of some of these factors on the safety and the accuracy of the procedure with a view toward providing general guidance for investigators conducting such studies. The Bayesian procedures evaluated use logistic regression to model the two responses, which are both assumed to be binary. The simulation study is based on features of a recently completed study of a compound with potential benefit to patients suffering from inflammatory diseases of the lung.

Relevance: 30.00%

Abstract:

The terpenoid chiral selectors dehydroabietic acid, 12,14-dinitrodehydroabietic acid and friedelin have been covalently linked to silica gel, yielding three chiral stationary phases, CSP 1, CSP 2 and CSP 3, respectively. The enantiodiscriminating capability of each of these phases was evaluated by HPLC with four families of chiral aromatic compounds: alcohols, amines, phenylalanine and tryptophan amino acid derivatives, and beta-lactams. The CSP 3 phase, containing a selector with a large friedelane backbone, is particularly suitable for resolving free alcohols and their derivatives bearing fluorine substituents, while CSP 2, with a dehydroabietic architecture, is the only phase that efficiently discriminates 1,1'-binaphthol atropisomers. CSP 3 also gives efficient resolution of the free amines. All three phases resolve well the racemates of N-trifluoroacetyl and N-3,5-dinitrobenzoyl phenylalanine amino acid ester derivatives. Good enantioseparation of beta-lactams and N-benzoyl tryptophan amino acid derivatives was achieved on CSP 1. In order to understand the structural factors that govern the chiral molecular recognition ability of these phases, molecular dynamics simulations were carried out in the gas phase with binary diastereomeric complexes formed by the selectors of CSP 1 and CSP 2 and several amino acid derivatives. Decomposition of the molecular mechanics energies shows that van der Waals interactions dominate the formation of the transient diastereomeric complexes, while the electrostatic binding interactions are primarily responsible for the enantioselective binding of the (R)- and (S)-analytes. Analysis of the hydrogen bonds shows that the electrostatic interactions are mainly associated with the formation of enantioselective N–H···O=C hydrogen bonds between the amide binding sites of the selectors and the carbonyl groups of the analytes. The role of mobile phase polarity, a mixture of n-hexane and propan-2-ol in different ratios, was also evaluated through molecular dynamics simulations in explicit solvent. (c) 2006 Elsevier Ltd. All rights reserved.

Relevance: 30.00%

Abstract:

This paper compares the volatile compound and fatty acid compositions of grilled beef from Aberdeen Angus and Holstein-Friesian steers slaughtered at 14 months, each breed fed from 6 months on either cereal-based concentrates or grass silage. Linoleic acid levels were higher in the muscle of concentrates-fed animals, which in the cooked meat resulted in increased levels of several compounds formed from linoleic acid decomposition. Levels of alpha-linolenic acid, and hence some volatile compounds derived from this fatty acid, were higher in the meat from the silage-fed steers. 1-Octen-3-ol, hexanal, 2-pentylfuran, trimethylamine, cis- and trans-2-octene and 4,5-dimethyl-2-pentyl-3-oxazoline were over 3 times higher in the steaks from the concentrates-fed steers, while grass-derived 1-phytene was present at much higher levels in the beef from the silage-fed steers. Only slight effects of breed were observed. (C) 2004 Elsevier Ltd. All rights reserved.