970 results for Mathematical methods


Relevance: 20.00%

Abstract:

Low-micromolar concentrations of sulfite, thiosulfate and sulfide, present in synthetic wastewater or anaerobic digester effluent, were quantified by derivatization with monobromobimane followed by HPLC separation with fluorescence detection. The concentration of elemental sulfur was determined, after its extraction with chloroform from the derivatized sample, by HPLC with UV detection. Recoveries of sulfide (both matrices) and of thiosulfate and sulfite (synthetic wastewater) were between 98 and 103%. The in-run RSDs for separate derivatizations were 13 and 19% for sulfite (two tests), between 1.5 and 6.6% for thiosulfate (two tests), and between 4.1 and 7.7% for sulfide (three tests). Response factors for the derivatives of sulfide and thiosulfate, but not sulfite, were stable over a 13-month period during which 730 samples were analysed. Dithionate and tetrathionate did not appear to be detectable with this method. Separation of the elemental sulfur peak from the derivatizing-agent peak was improved considerably by monitoring elution at 297 nm instead of 263 nm. (C) 2002 Elsevier Science B.V. All rights reserved.

Relevance: 20.00%

Abstract:

This note gives a theory of state transition matrices for linear systems of fuzzy differential equations, which is then used to give a fuzzy version of the classical variation of constants formula. A simple example of a time-independent control system is used to illustrate the methods. While time-dependent systems raise problems similar to those of the crisp case, in the time-independent case the calculations reduce to elementary eigenvalue-eigenvector problems. In particular, for nonnegative or nonpositive matrices, the problems at each level set can easily be solved in MATLAB to give the level sets of the fuzzy solution. (C) 2002 Elsevier Science B.V. All rights reserved.
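
As a rough illustration of the time-independent case described above, the sketch below propagates the endpoints of each alpha-level of a fuzzy initial condition through the crisp state transition matrix e^{At} for a nonnegative matrix A (so that endpoints map to endpoints). The matrix, the levels and the triangular initial condition are invented for illustration, scipy's matrix exponential stands in for the eigenvalue-eigenvector calculation, and this is not the note's own construction.

```python
import numpy as np
from scipy.linalg import expm

def fuzzy_level_sets(A, x0_lower, x0_upper, t, alphas=(0.0, 0.5, 1.0)):
    """Endpoints of the alpha-level sets of the solution of x' = A x with a
    fuzzy initial condition, assuming A is entrywise nonnegative so that the
    state transition matrix e^{At} is entrywise nonnegative and maps the
    endpoints of each level set to the endpoints of the solution's level set."""
    Phi = expm(A * t)                       # crisp state transition matrix
    return {alpha: (Phi @ x0_lower(alpha), Phi @ x0_upper(alpha))
            for alpha in alphas}

# illustrative triangular fuzzy initial condition centred at (1, 0)
A = np.array([[0.0, 1.0], [0.5, 0.0]])
x0_lo = lambda a: np.array([1.0 - 0.2 * (1 - a), -0.1 * (1 - a)])
x0_hi = lambda a: np.array([1.0 + 0.2 * (1 - a), +0.1 * (1 - a)])
print(fuzzy_level_sets(A, x0_lo, x0_hi, t=1.0))
```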

Relevance: 20.00%

Abstract:

Objectives: This study examines human scalp electroencephalographic (EEG) data for evidence of non-linear interdependence between posterior channels, and studies the spectral and phase properties of those EEG epochs exhibiting non-linear interdependence. Methods: Scalp EEG data were collected from 40 healthy subjects. A technique for the detection of non-linear interdependence was applied to 2.048 s segments of posterior bipolar electrode data. Amplitude-adjusted phase-randomized surrogate data were used to determine statistically which EEG epochs exhibited non-linear interdependence. Results: Statistically significant evidence of non-linear interactions was found in 2.9% (eyes open) to 4.8% (eyes closed) of the epochs. In the eyes-open recordings, these epochs exhibited a peak in the spectral and cross-spectral density functions at about 10 Hz. Two types of EEG epoch are evident in the eyes-closed recordings: one exhibits a peak in the spectral density and cross-spectrum at 8 Hz, while the other has increased spectral and cross-spectral power across faster frequencies. Epochs identified as exhibiting non-linear interdependence display a tendency towards phase interdependence across and between a broad range of frequencies. Conclusions: Non-linear interdependence is detectable in a small number of multichannel EEG epochs and makes a contribution to the alpha rhythm. Non-linear interdependence produces spatially distributed activity that exhibits phase synchronization between oscillations present at different frequencies. The possible physiological significance of these findings is discussed with reference to the dynamical properties of neural systems and the role of synchronous activity in the neocortex. (C) 2002 Elsevier Science Ireland Ltd. All rights reserved.
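
The surrogate-data step can be sketched directly. Below is a minimal Python implementation of one standard recipe for amplitude-adjusted phase-randomized surrogates (the AAFT construction); the function name and details are illustrative and not taken from the paper, which applies such surrogates to 2.048 s multichannel epochs.

```python
import numpy as np

def aaft_surrogate(x, rng=None):
    """Amplitude-adjusted phase-randomized (AAFT) surrogate of a 1-D series."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(x)
    # 1. Rank-order a Gaussian sample so it mimics the amplitude distribution of x.
    gauss = np.sort(rng.standard_normal(n))
    ranks_x = np.argsort(np.argsort(x))
    y = gauss[ranks_x]
    # 2. Phase-randomize the Gaussianized series (preserves the power spectrum).
    Y = np.fft.rfft(y)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(Y))
    phases[0] = 0.0                                  # keep the mean component
    y_rand = np.fft.irfft(np.abs(Y) * np.exp(1j * phases), n)
    # 3. Rescale back: reorder the original values to the ranks of y_rand.
    ranks_y = np.argsort(np.argsort(y_rand))
    return np.sort(x)[ranks_y]
```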

Relevance: 20.00%

Abstract:

Binning and truncation of data are common in data analysis and machine learning. This paper addresses the problem of fitting mixture densities to multivariate binned and truncated data. The EM approach proposed by McLachlan and Jones (Biometrics, 44: 2, 571-578, 1988) for the univariate case is generalized to multivariate measurements. The multivariate solution requires the evaluation of multidimensional integrals over each bin at each iteration of the EM procedure, and a naive implementation can be computationally inefficient. To reduce the computational cost, a number of straightforward numerical techniques are proposed. Results on simulated data indicate that the proposed methods can achieve significant computational gains with no loss in the accuracy of the final parameter estimates. Furthermore, experimental results suggest that with a sufficient number of bins and data points it is possible to estimate the true underlying density almost as well as if the data were not binned. The paper concludes with a brief description of an application of this approach to the diagnosis of iron deficiency anemia, in the context of binned and truncated bivariate measurements of volume and hemoglobin concentration from an individual's red blood cells.
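
To make the binned-data EM idea concrete, here is a minimal one-dimensional sketch: a Gaussian mixture is fitted to bin counts, with responsibilities computed from the probability mass of each bin and the M-step using truncated-normal moments within each bin. It omits the multivariate bin integrals, the truncation handling and the computational shortcuts that are the paper's actual contribution; all names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def binned_em_1d(edges, counts, n_components=2, n_iter=200, seed=0):
    """EM for a univariate Gaussian mixture fitted to binned counts.

    edges  : bin edges, length B + 1
    counts : observed count in each bin, length B
    """
    rng = np.random.default_rng(seed)
    a, b = np.asarray(edges[:-1], float), np.asarray(edges[1:], float)
    counts = np.asarray(counts, float)
    mids, N = 0.5 * (a + b), counts.sum()

    # crude initialization from the bin midpoints
    mu = rng.choice(mids, n_components, replace=False)
    mean0 = np.average(mids, weights=counts)
    sigma = np.full(n_components,
                    np.sqrt(np.average((mids - mean0) ** 2, weights=counts)))
    pi = np.full(n_components, 1.0 / n_components)

    for _ in range(n_iter):
        # E-step: probability mass of each bin under each component
        alpha = (a[:, None] - mu) / sigma
        beta = (b[:, None] - mu) / sigma
        Z = np.clip(norm.cdf(beta) - norm.cdf(alpha), 1e-300, None)   # (B, K)
        resp = pi * Z
        resp /= resp.sum(axis=1, keepdims=True)

        # first and second truncated-normal moments of each component per bin
        delta = (norm.pdf(alpha) - norm.pdf(beta)) / Z
        m1 = mu + sigma * delta
        var = sigma ** 2 * (1 + (alpha * norm.pdf(alpha)
                                 - beta * norm.pdf(beta)) / Z - delta ** 2)
        m2 = var + m1 ** 2

        # M-step: each bin contributes with weight count * responsibility
        w = counts[:, None] * resp
        Wk = w.sum(axis=0)
        pi = Wk / N
        mu = (w * m1).sum(axis=0) / Wk
        sigma = np.sqrt(np.maximum((w * m2).sum(axis=0) / Wk - mu ** 2, 1e-12))
    return pi, mu, sigma
```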

Relevance: 20.00%

Abstract:

Formulations of fuzzy integral equations in terms of the Aumann integral do not reflect the behavior of the corresponding crisp models. Consequently, they are ill-adapted to describing physical phenomena, even when vagueness and uncertainty are present. A similar difficulty for fuzzy ODEs has been overcome by interpreting them in terms of families of differential inclusions. The paper extends this formalism to fuzzy integral equations and shows that the resulting solution sets and attainability sets are fuzzy and give far better descriptions of uncertain models involving integral equations. The investigation is restricted to Volterra-type equations with mildly restrictive conditions, but the methods are capable of extensive generalization to other types of equations and more general assumptions. The results are illustrated by integral equations relating to control models with fuzzy uncertainties.
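
For orientation only, here is a crisp sketch (not the paper's inclusion-based formalism): a trapezoidal-rule solver for a Volterra equation of the second kind is applied to the lower and upper endpoint functions of each alpha-level of a fuzzy forcing term. With a nonnegative kernel the solution depends monotonically on the forcing term, so the two crisp solutions bracket that level set; the kernel, forcing term and levels are invented for illustration.

```python
import numpy as np

def volterra2_trapezoid(f_vals, K, t):
    """Solve x(t) = f(t) + int_0^t K(t, s) x(s) ds on a uniform grid t
    by the trapezoidal rule (crisp Volterra equation of the second kind)."""
    n, h = len(t), t[1] - t[0]
    x = np.empty(n)
    x[0] = f_vals[0]
    for i in range(1, n):
        s = 0.5 * K(t[i], t[0]) * x[0]
        s += sum(K(t[i], t[j]) * x[j] for j in range(1, i))
        x[i] = (f_vals[i] + h * s) / (1.0 - 0.5 * h * K(t[i], t[i]))
    return x

# bracket each alpha-level of a fuzzy (interval-valued) forcing term
t = np.linspace(0.0, 1.0, 101)
K = lambda ti, s: 0.5 * np.exp(-(ti - s))            # illustrative nonnegative kernel
for alpha in (0.0, 1.0):
    f_lo = (1.0 - 0.2 * (1 - alpha)) * np.ones_like(t)
    f_hi = (1.0 + 0.2 * (1 - alpha)) * np.ones_like(t)
    x_lo, x_hi = volterra2_trapezoid(f_lo, K, t), volterra2_trapezoid(f_hi, K, t)
    print(alpha, x_lo[-1], x_hi[-1])                 # level-set endpoints at t = 1
```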

Relevance: 20.00%

Abstract:

Motivation: This paper introduces the software EMMIX-GENE, developed specifically for a model-based approach to the clustering of microarray expression data, in particular of tissue samples on a very large number of genes. The latter is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. A feasible approach is to first select a subset of genes relevant to the clustering of the tissue samples: mixtures of t distributions are fitted to rank the genes in order of increasing size of the likelihood ratio statistic for the test of one versus two components in the mixture model. Imposing a threshold on the likelihood ratio statistic, in conjunction with a threshold on cluster size, allows a relevant set of genes to be selected. However, even this reduced set of genes will usually be too large for a normal mixture model to be fitted directly to the tissues, so mixtures of factor analyzers are used to effectively reduce the dimension of the gene feature space. Results: The usefulness of the EMMIX-GENE approach to the clustering of tissue samples is demonstrated on two well-known data sets on colon and leukaemia tissues. For both data sets, relevant subsets of the genes can be selected that reveal interesting clusterings of the tissues, consistent either with the external classification of the tissues or with background biological knowledge of these data sets.
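
The gene-selection step can be sketched with standard tools: fit one- and two-component mixtures to each gene and rank by the likelihood ratio statistic. The sketch below substitutes Gaussian mixtures from scikit-learn for the t mixtures that EMMIX-GENE actually fits, and only hints at the thresholding step in a comment; names are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def rank_genes_by_lrt(X, random_state=0):
    """Rank genes by the likelihood-ratio statistic for one versus two
    mixture components fitted to each gene's expression values over tissues.

    X : array of shape (n_tissues, n_genes).
    """
    n, p = X.shape
    stat = np.empty(p)
    for j in range(p):
        x = X[:, [j]]
        g1 = GaussianMixture(1, random_state=random_state).fit(x)
        g2 = GaussianMixture(2, n_init=5, random_state=random_state).fit(x)
        stat[j] = 2.0 * n * (g2.score(x) - g1.score(x))   # -2 log lambda
    order = np.argsort(stat)[::-1]    # genes with most two-component support first
    return order, stat

# genes retained for clustering: those whose statistic exceeds a chosen threshold,
# e.g.  order, stat = rank_genes_by_lrt(X); kept = order[stat[order] > threshold]
```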

Relevance: 20.00%

Abstract:

Motivation: A consensus sequence for a family of related sequences is, as the name suggests, a sequence that captures the features common to most members of the family. Consensus sequences are important in various DNA sequencing applications and are a convenient way to characterize a family of molecules. Results: This paper describes a new algorithm for finding a consensus sequence, using the popular optimization method known as simulated annealing. Unlike the conventional approach of finding a consensus sequence by first forming a multiple sequence alignment, this algorithm searches for a sequence that minimises the sum of pairwise distances to each of the input sequences. The resulting consensus sequence can then be used to induce a multiple sequence alignment. The time required by the algorithm scales linearly with the number of input sequences and quadratically with the length of the consensus sequence. We present results demonstrating the high quality of the consensus sequences and alignments produced by the new algorithm. For comparison, we also present similar results obtained using ClustalW. The new algorithm outperforms ClustalW in many cases.
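
A minimal sketch of the idea, not the paper's program: simulated annealing over substitutions, insertions and deletions of a candidate consensus, scored by its summed edit distance to the inputs. The move set, cooling schedule and toy sequences are all illustrative choices.

```python
import math
import random

def edit_distance(a, b):
    """Levenshtein distance by dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def consensus_sa(seqs, alphabet="ACGT", steps=20000, t0=2.0, seed=0):
    """Simulated-annealing search for a sequence minimising the sum of edit
    distances to the input sequences."""
    rng = random.Random(seed)
    cons = list(max(seqs, key=len))                  # start from the longest input
    cost = sum(edit_distance("".join(cons), s) for s in seqs)
    for k in range(steps):
        temp = t0 * (1.0 - k / steps) + 1e-9         # linear cooling schedule
        cand = cons[:]
        move, pos = rng.randrange(3), rng.randrange(max(len(cand), 1))
        if move == 0 and cand:                       # substitution
            cand[pos] = rng.choice(alphabet)
        elif move == 1:                              # insertion
            cand.insert(pos, rng.choice(alphabet))
        elif move == 2 and len(cand) > 1:            # deletion
            del cand[pos]
        new_cost = sum(edit_distance("".join(cand), s) for s in seqs)
        # Metropolis rule: always accept improvements, sometimes accept worse moves
        if new_cost <= cost or rng.random() < math.exp(-(new_cost - cost) / temp):
            cons, cost = cand, new_cost
    return "".join(cons), cost

print(consensus_sa(["ACGTGA", "ACGTTA", "ACCTGA"], steps=2000))
```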

Relevance: 20.00%

Abstract:

The two-node tandem Jackson network serves as a convenient reference model for the analysis and testing of different methodologies and techniques in rare event simulation. In this paper we consider a new approach to efficiently estimating the probability that the content of the second buffer exceeds some high level L before it becomes empty, starting from a given state. The approach is based on a Markov additive process representation of the buffer processes, leading to an exponential change of measure to be used in an importance sampling procedure. Unlike the changes of measure proposed and studied in the recent literature, the one derived here is a function of the content of the first buffer. We prove that when the first buffer is finite, this method yields asymptotically efficient simulation for any set of arrival and service rates. In fact, the relative error is bounded independently of the level L, a new result that has not been established for any other known method. When the first buffer is infinite, we propose a natural extension of the exponential change of measure for the finite-buffer case. In this case, the relative error is shown to be bounded (independently of L) only when the second server is the bottleneck, a result which is known to hold for some other methods derived through large deviations analysis. When the first server is the bottleneck, experimental results using our method suggest that the relative error is bounded linearly in L.
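
For context, the quantity being estimated can be written down as a crude Monte Carlo estimator over the embedded jump chain of the tandem network; this plain estimator is what the paper's state-dependent change of measure is designed to improve on, and the change of measure itself is not reproduced here.

```python
import random

def overflow_prob_mc(lam, mu1, mu2, start, L, n_runs=200_000, seed=1):
    """Crude Monte Carlo estimate of the probability that the second buffer of a
    two-node tandem Jackson network reaches level L before it empties, starting
    from state (x1, x2).  lam = arrival rate, mu1/mu2 = service rates."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_runs):
        x1, x2 = start
        while 0 < x2 < L:
            # only the active transitions compete (x2 > 0 inside the loop)
            rates = [lam, mu1 if x1 > 0 else 0.0, mu2]
            u = rng.random() * sum(rates)
            if u < rates[0]:                      # external arrival to queue 1
                x1 += 1
            elif u < rates[0] + rates[1]:         # service at node 1 feeds node 2
                x1, x2 = x1 - 1, x2 + 1
            else:                                 # departure from node 2
                x2 -= 1
        hits += (x2 >= L)
    return hits / n_runs

print(overflow_prob_mc(lam=1.0, mu1=2.0, mu2=3.0, start=(0, 1), L=10))
```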

Relevance: 20.00%

Abstract:

We reinterpret the state space dimension equations for geometric Goppa codes. An easy consequence is that if deg G <= (n-2)/2 or deg G >= (n-2)/2 + 2g, then the state complexity of C_L(D, G) is equal to the Wolf bound. For deg G in [(n-1)/2, (n-3)/2 + 2g], we use Clifford's theorem to give a simple lower bound on the state complexity of C_L(D, G). We then derive two further lower bounds on the state space dimensions of C_L(D, G) in terms of the gonality sequence of F/F_q. (The gonality sequence is known for many of the function fields of interest for defining geometric Goppa codes.) One of the gonality bounds uses previous results on the generalised weight hierarchy of C_L(D, G) and one follows in a straightforward way from first principles; often they are equal. For Hermitian codes both gonality bounds are equal to the DLP lower bound on state space dimensions. We conclude by using these results to calculate the DLP lower bound on state complexity for Hermitian codes.
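
For reference, the Wolf bound referred to above is the standard bound on the state complexity of an [n, k] linear code; its form below is general background rather than a quotation from this abstract, while the equality condition restates what the abstract claims (g denotes the genus).

```latex
% Wolf bound (standard background) and the equality condition stated in the abstract.
\[
  s(C) \le \min(k,\; n-k) \qquad \text{(Wolf bound for an $[n,k]$ code)}
\]
\[
  \deg G \le \tfrac{n-2}{2} \;\;\text{or}\;\; \deg G \ge \tfrac{n-2}{2} + 2g
  \;\Longrightarrow\; s\bigl(C_L(D,G)\bigr) = \min(k,\; n-k).
\]
```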

Relevance: 20.00%

Abstract:

We study the continuous problem y'' = f(x, y, y'), x in [0, 1], 0 = G((y(0), y(1)), (y'(0), y'(1))), and its discrete approximation (y_{k+1} - 2y_k + y_{k-1})/h^2 = f(t_k, y_k, v_k), k = 1, ..., n-1, 0 = G((y_0, y_n), (v_1, v_n)), where f and G = (g_0, g_1) are continuous and fully nonlinear, h = 1/n, v_k = (y_k - y_{k-1})/h for k = 1, ..., n, and t_k = kh for k = 0, ..., n. We assume there exist strict lower and strict upper solutions and impose additional conditions on f and G which are known to yield a priori bounds on, and to guarantee the existence of, solutions of the continuous problem. We show that the discrete approximation also has solutions which approximate solutions of the continuous problem and, as the grid size goes to 0, converge to the solution of the continuous problem when it is unique. Homotopy methods can be used to compute the solution of the discrete approximation. Our results were motivated by those of Gaines.
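
To make the discrete scheme concrete, the sketch below assembles the finite-difference equations and the boundary conditions as one nonlinear system and hands it to a generic Newton-type solver (scipy's fsolve) rather than the homotopy continuation the paper points to; the particular f and G are invented for illustration.

```python
import numpy as np
from scipy.optimize import fsolve

def solve_discrete_bvp(f, G, n=50):
    """Solve the scheme
        (y_{k+1} - 2 y_k + y_{k-1}) / h^2 = f(t_k, y_k, v_k),  k = 1..n-1,
        0 = G((y_0, y_n), (v_1, v_n)),   v_k = (y_k - y_{k-1}) / h,
    as a nonlinear system in (y_0, ..., y_n).
    f maps (t, y, v) to a scalar; G maps ((y0, yn), (v1, vn)) to a 2-vector."""
    h = 1.0 / n
    t = np.arange(n + 1) * h

    def residual(y):
        v = np.diff(y) / h                           # v[k-1] = (y_k - y_{k-1}) / h
        r = np.empty(n + 1)
        interior = (y[2:] - 2 * y[1:-1] + y[:-2]) / h**2
        r[1:n] = interior - f(t[1:n], y[1:n], v[:n - 1])
        r[[0, n]] = G((y[0], y[n]), (v[0], v[n - 1]))
        return r

    return t, fsolve(residual, np.zeros(n + 1))      # trivial initial guess

# illustrative example: y'' = -sin(y) + y', with y(0) = 0 and y(1) = 1
f = lambda t, y, v: -np.sin(y) + v
G = lambda yy, vv: np.array([yy[0], yy[1] - 1.0])
t, y = solve_discrete_bvp(f, G)
```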

Relevance: 20.00%

Abstract:

We present a novel maximum-likelihood-based algorithm for estimating the distribution of alignment scores from the scores of unrelated sequences in a database search. Using a new method for measuring the accuracy of p-values, we show that our maximum-likelihood-based algorithm is more accurate than existing regression-based and lookup table methods. We explore a more sophisticated way of modeling and estimating the score distributions (using a two-component mixture model and expectation maximization), but conclude that this does not improve significantly over simply ignoring scores with small E-values during estimation. Finally, we measure the classification accuracy of p-values estimated in different ways and observe that inaccurate p-values can, somewhat paradoxically, lead to higher classification accuracy. We explain this paradox and argue that statistical accuracy, not classification accuracy, should be the primary criterion in comparisons of similarity search methods that return p-values that adjust for target sequence length.
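
A minimal sketch of the general recipe (not the paper's algorithm): fit a null score distribution to unrelated-sequence scores by maximum likelihood and convert observed scores to p-values via the survival function. A Gumbel form is assumed here because it is the customary model for local alignment scores; the paper's censoring of small-E-value scores and its two-component mixture variant are not reproduced.

```python
import numpy as np
from scipy.stats import gumbel_r

def null_score_pvalues(null_scores):
    """Fit a score distribution to unrelated-sequence (null) scores by maximum
    likelihood and return a p-value function for observed scores."""
    loc, scale = gumbel_r.fit(np.asarray(null_scores, dtype=float))
    return lambda s: gumbel_r.sf(s, loc=loc, scale=scale)   # P(score >= s | null)

# usage: p_value = null_score_pvalues(scores_of_unrelated_hits); p_value(42.0)
```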

Relevance: 20.00%

Abstract:

Control of chaotic instability in a rotating multibody system, in the form of a dual-spin spacecraft with an axial nutational damper, is achieved using an algorithm derived using energy methods. The control method is implemented on two realistic spacecraft parameter configurations which have been found to exhibit chaotic instability when a sinusoidally varying torque is applied to the spacecraft over a range of forcing amplitudes and frequencies. Such a torque may, in practice, arise from a malfunction of the control system or from an unbalanced rotor. Chaotic instabilities arising from these torques could introduce uncertainties and irregularities into a spacecraft's attitude and consequently impair pointing accuracy. The control method is formulated from nutational stability results derived using an energy sink approximation for a dual-spin spacecraft with an asymmetric platform and an axisymmetric rotor. The effectiveness of the control method is shown numerically, and the results are examined by means of time histories, phase space, Poincaré maps, Lyapunov characteristic exponents and bifurcation diagrams.
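
As a pointer to how one of the diagnostics is computed, the sketch below builds a Poincaré section by sampling a sinusoidally forced system once per forcing period. The oscillator used is a generic forced, damped pendulum, not the dual-spin spacecraft model analysed in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def poincare_section(rhs, x0, forcing_period, n_points=200, transients=50):
    """Sample a sinusoidally forced system once per forcing period to build a
    Poincare map; the early samples are discarded as transients."""
    t_samples = forcing_period * np.arange(transients + n_points + 1)
    sol = solve_ivp(rhs, (0.0, t_samples[-1]), x0, t_eval=t_samples,
                    rtol=1e-8, atol=1e-10, max_step=forcing_period / 50)
    return sol.y[:, transients:]

# generic forced, damped pendulum: theta'' + c theta' + sin(theta) = A sin(w t)
c, A, w = 0.2, 1.2, 2.0 / 3.0                 # damping, forcing amplitude, frequency
rhs = lambda t, x: [x[1], -c * x[1] - np.sin(x[0]) + A * np.sin(w * t)]
points = poincare_section(rhs, x0=[0.1, 0.0], forcing_period=2 * np.pi / w)
```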

Relevance: 20.00%

Abstract:

Improvements to peroxide oxidation methods for analysing acid sulfate soils (ASS) are introduced. The soil : solution ratio has been increased to 1 : 40, titrations are performed in suspension, and the duration of the peroxide digest stage is substantially shortened. For 9 acid sulfate soils, the peroxide oxidisable sulfur value obtained using the improved method was compared with the reduced inorganic sulfur result obtained using the chromium reducible sulfur method. The regression was highly significant; the slope of the regression line was not significantly different (P = 0.05) from unity, and the intercept was not significantly different from zero. A complete sulfur budget for the improved method showed that there was no loss of sulfur of the kind reported for earlier peroxide oxidation techniques. When soils were very finely ground, efficient oxidation of sulfides was achieved despite the milder digestion conditions. Highly sulfidic and organic soils were shown to be the most difficult to analyse using either the improved method or the chromium method. No single analytical method can be universally applied to all ASS; rather, a suite of methods is necessary for a thorough understanding of many ASS. The improved peroxide method, in combination with the chromium method and the 4 M HCl extraction, forms a sound platform for informed decision making on the management of acid sulfate soils.
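
The method comparison described above can be sketched as a regression of one method's results on the other's, with t-tests of slope = 1 and intercept = 0. The sketch below uses scipy and placeholder variable names; it is an illustration of that statistical comparison, not the authors' analysis.

```python
import numpy as np
from scipy import stats

def compare_methods(pos, crs, alpha=0.05):
    """Regress peroxide oxidisable sulfur (pos) on chromium reducible sulfur (crs)
    results and test whether the slope differs from 1 and the intercept from 0."""
    res = stats.linregress(crs, pos)
    n = len(crs)
    # t statistics built from the standard errors of the fitted slope and intercept
    t_slope = (res.slope - 1.0) / res.stderr
    t_inter = res.intercept / res.intercept_stderr
    crit = stats.t.ppf(1 - alpha / 2, df=n - 2)
    return {
        "slope": res.slope, "intercept": res.intercept, "r2": res.rvalue**2,
        "slope_differs_from_1": abs(t_slope) > crit,
        "intercept_differs_from_0": abs(t_inter) > crit,
    }
```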