894 results for Least mean squares methods


Relevance: 100.00%

Abstract:

Principles and guidelines are presented to ensure a solid scientific standard for papers dealing with the taxonomy of taxa of Pasteurellaceae Pohl 1981. The classification of the Pasteurellaceae is in principle based on a polyphasic approach. DNA sequencing of certain genes is very important for defining the borders of a taxon. However, the characteristics that are common to all members of the taxon, and which might be helpful for separating it from related taxa, must also be identified. Descriptions have to be based on as many strains as possible (inclusion of at least five strains is highly desirable), representing different sources with respect to geography and ecology, to allow proper characterization both phenotypically and genotypically and to establish the extent of diversity of the cluster to be named. A genus must be monophyletic based on 16S rRNA gene sequence-based phylogenetic analysis. Only in very rare cases is it acceptable that monophyly cannot be demonstrated by 16S rRNA gene sequence comparison. Recently, the monophyly of genera has been confirmed by sequence comparison of housekeeping genes. In principle, a new genus should be recognized by a distinct phenotype, and the characters that separate the new genus from its neighbours should be stated clearly. Owing to the overall importance of accurate classification of species, at least two genotypic methods are needed to show coherence and separation at the species level. The main criterion for the classification of a novel species is that it forms a monophyletic group based on 16S rRNA gene sequence-based phylogenetic analysis. However, some groups might also include closely related species. In these cases, more sensitive tools for genetic recognition of species should be applied, such as DNA-DNA hybridization. The comparison of housekeeping gene sequences has recently been used for the genotypic definition of species.
To separate species, phenotypic characters that allow their recognition must also be identified, and at least two phenotypic differences from existing species should be identified where possible. We recommend the use of the subspecies category only for subgroups associated with disease or similar biological characteristics. At the subspecies level, the genotypic groups must always be nested within the boundaries of an existing species. Phenotypic cohesion must be documented at the subspecies level, and separation between subspecies and related species, as well as association with a particular disease and host, must be fully documented. An overview is given of methods previously used to characterize isolates of the Pasteurellaceae. Genotypic and phenotypic methods are classified according to whether they test for diversity and cohesion or separate taxa at the genus, species and subspecies levels.

Relevance: 100.00%

Abstract:

robreg provides a number of robust estimators for linear regression models. Among them are the high breakdown-point and high efficiency MM-estimator, the Huber and bisquare M-estimator, and the S-estimator, each supporting classic or robust standard errors. Furthermore, basic versions of the LMS/LQS (least median of squares) and LTS (least trimmed squares) estimators are provided. Note that the moremata package, also available from SSC, is required.
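robreg itself is a Stata package, so its syntax is not reproduced here. As a language-neutral sketch of one of the estimators it offers, the following fits a Huber M-estimator by iteratively reweighted least squares in NumPy; the data, the tuning constant c = 1.345 and the iteration count are illustrative assumptions, not taken from robreg:

```python
import numpy as np

def huber_m_estimator(X, y, c=1.345, n_iter=50):
    """Huber M-estimator for linear regression via IRLS.

    c = 1.345 is the usual choice giving ~95% efficiency at the Gaussian model.
    """
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # OLS starting values
    for _ in range(n_iter):
        r = y - X @ beta
        # robust scale: normalized median absolute deviation of residuals
        s = np.median(np.abs(r - np.median(r))) / 0.6745
        u = np.abs(r) / max(s, 1e-12)
        w = np.minimum(1.0, c / np.maximum(u, 1e-12))  # Huber weights
        W = w[:, None]
        beta = np.linalg.solve(X.T @ (W * X), X.T @ (w * y))
    return beta

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 + 3.0 * x + rng.normal(scale=0.5, size=200)
y[:20] += 15.0                                       # inject gross outliers
X = np.column_stack([np.ones_like(x), x])

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_hub = huber_m_estimator(X, y)
```

The Huber fit stays near the true coefficients (2, 3) while the OLS intercept is pulled upward by the contaminated observations.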

Relevance: 100.00%

Abstract:

New trace element analyses are presented for Leg 180 dolerites, basalts from the Papuan Ultramafic Belt (PUB), and basement rocks of Woodlark Island. The Leg 180 dolerites are similar to those from Woodlark Island in being derived from an enriched source but differ from the PUB, which came from a source similar to normal mid-ocean ridge basalts. A reliable 40Ar/39Ar age of 54.0 ± 1.0 Ma has been obtained by step heating of a whole-rock sample from Site 1109, and a similar but less reliable age was obtained for a sample from Site 1118. Plagioclase from Site 1109 did not give a meaningful age. This age is broadly similar to ages from the Dabi volcanics of the nearby Cape Vogel and for the PUB.

Relevance: 100.00%

Abstract:

This paper surveys some of the fundamental problems in natural language (NL) understanding (syntax, semantics, pragmatics, and discourse) and the current approaches to solving them. Some recent developments in NL processing include increased emphasis on corpus-based rather than example- or intuition-based work, attempts to measure the coverage and effectiveness of NL systems, dealing with discourse and dialogue phenomena, and attempts to use both analytic and stochastic knowledge. Critical areas for the future include grammars that are appropriate to processing large amounts of real language; automatic (or at least semi-automatic) methods for deriving models of syntax, semantics, and pragmatics; self-adapting systems; and integration with speech processing. Of particular importance are techniques that can be tuned to such requirements as full versus partial understanding and spoken language versus text. Portability (the ease with which one can configure an NL system for a particular application) is one of the largest barriers to application of this technology.

Relevance: 100.00%

Abstract:

Apoptosis is a physiological process of cell death, characterized by distinct morphological alterations and well-defined biochemical and molecular mechanisms. Its prominent role in numerous biological events and important pathological processes has led to growing interest in investigating the cellular mechanisms that regulate the apoptotic process. The demand for methodologies capable of identifying apoptotic cells has triggered an enormous development of techniques. However, the properties demonstrated by these assays do not always apply to the study of tissue samples, so the choice among the different methods must be carefully evaluated, taking into account the intended application and the morphological alterations to be detected. Of the various techniques available for detecting apoptosis in tissues, many investigators recommend the TUNEL method, which is based on the labelling of endonucleosomal products resulting from DNA fragmentation. Other histochemical methods also available include the detection of cytochrome c released from the mitochondria, or the detection of the pro- and anti-apoptotic proteins Bax, Bidm and Bcl-2, which are involved in the intrinsic mechanisms of apoptosis. More recently, the labelling of specific products resulting from the cleavage of target proteins by caspases has been considered a promising approach. The main objective of this work was to evaluate immunohistochemistry as a method for detecting apoptosis at the cellular level in animal tissues, based on the TUNEL method, which allows the detection of DNA fragments. The results led to the conclusion that, although the TUNEL method has limitations in sensitivity and specificity, it is a useful immunohistochemical tool for detecting apoptotic cells.
However, in the opinion of several authors, at least two immunohistochemical methods should be applied to validate the occurrence of the apoptotic process, which is why detection of cytosolic cytochrome c was chosen as a complementary method, since its release into the cytosol is implicated in the activation of apoptosis.

Relevance: 100.00%

Abstract:

We employ the methods presented in the previous chapter for decoding corrupted codewords, encoded using sparse parity check error correcting codes. We show the similarity between the equations derived from the TAP approach and those obtained from belief propagation, and examine their performance as practical decoding methods.

Relevance: 100.00%

Abstract:

The Thouless-Anderson-Palmer (TAP) approach was originally developed for analysing the Sherrington-Kirkpatrick model in the study of spin glasses, and has since been employed mainly in the context of extensively connected systems, in which each dynamical variable interacts weakly with all the others. Recently, we extended this method to handle general intensively connected systems, where each variable has only O(1) connections characterised by strong couplings. However, the new formulation looks quite different from existing analyses, and it is natural to ask whether it actually reproduces known results for systems of extensive connectivity. In this chapter, we apply our formulation of the TAP approach to an extensively connected system, the Hopfield associative memory model, showing that it produces results identical to those obtained by the conventional formulation.

Relevance: 100.00%

Abstract:

We propose an artificial neural network (ANN) equalizer for transmission performance enhancement of coherent optical OFDM (C-OOFDM) signals. The ANN equalizer is more effective than the least mean square (LMS) algorithm at combating both chromatic dispersion (CD) and single-mode fibre (SMF)-induced nonlinearities. The equalizer offers a 1.5 dB improvement in optical signal-to-noise ratio (OSNR) over the LMS algorithm for 40 Gbit/s C-OOFDM signals when only CD is considered. It is also shown that the ANN can double the transmission distance, up to 320 km of SMF, compared with LMS, providing a nonlinearity tolerance improvement of ∼0.7 dB OSNR.
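As background for the LMS baseline referred to above, here is a minimal sketch of an LMS adaptive equalizer trained on known symbols. This is a generic toy example, not the authors' C-OOFDM setup: the channel taps, filter length and step size are invented for illustration.

```python
import numpy as np

def lms_equalizer(x, d, n_taps=8, mu=0.01):
    """Adapt an FIR equalizer with the LMS rule  w <- w + mu * e * conj(x)."""
    w = np.zeros(n_taps, dtype=complex)
    y = np.zeros(len(d), dtype=complex)
    for n in range(n_taps - 1, len(x)):
        xn = x[n - n_taps + 1:n + 1][::-1]   # regressor, newest sample first
        y[n] = w @ xn                        # equalizer output
        e = d[n] - y[n]                      # error against the known symbol
        w = w + mu * e * np.conj(xn)         # stochastic gradient step
    return w, y

rng = np.random.default_rng(1)
symbols = rng.choice([-1.0, 1.0], size=5000) + 0j    # BPSK training symbols
channel = np.array([1.0, 0.4, 0.2])                  # toy dispersive channel
received = np.convolve(symbols, channel)[: len(symbols)]
received = received + 0.01 * rng.normal(size=len(symbols))

w, y = lms_equalizer(received, symbols)
mse_tail = np.mean(np.abs(symbols[-1000:] - y[-1000:]) ** 2)
```

After convergence, the residual error of the equalized signal is far below the raw intersymbol-interference level of the unequalized channel.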

Relevance: 100.00%

Abstract:

We propose a Wiener-Hammerstein (W-H) channel estimation algorithm for Long-Term Evolution (LTE) systems. The LTE standard provides known data in the form of pilot symbols and exploits them through coherent detection to improve system performance. These pilots are placed in a hybrid pattern to cover both the time and frequency domains. Our aim is to adapt the W-H equalizer (W-H/E) to the LTE standard to compensate for both the linear and nonlinear effects induced by power amplifiers and multipath channels. We evaluate the performance of the W-H/E for a downlink LTE system in terms of BLER, EVM and throughput versus SNR, and then compare the results with a traditional least mean square (LMS) equalizer. The W-H/E is shown to reduce both linear and nonlinear distortions significantly compared with LMS and to improve LTE downlink system performance.

Relevance: 100.00%

Abstract:

This thesis describes the development of an adaptive control algorithm for Computerized Numerical Control (CNC) machines implemented in a multi-axis motion control board based on the TMS320C31 DSP chip. The adaptive process involves two stages: Plant Modeling and Inverse Control Application. The first stage builds a non-recursive model of the CNC system (plant) using the Least-Mean-Square (LMS) algorithm. The second stage consists of the definition of a recursive structure (the controller) that implements an inverse model of the plant by using the coefficients of the model in an algorithm called Forward-Time Calculation (FTC). In this way, when the inverse controller is implemented in series with the plant, it will pre-compensate for the modification that the original plant introduces in the input signal. The performance of this solution was verified at three different levels: Software simulation, implementation in a set of isolated motor-encoder pairs and implementation in a real CNC machine. The use of the adaptive inverse controller effectively improved the step response of the system in all three levels. In the simulation, an ideal response was obtained. In the motor-encoder test, the rise time was reduced by as much as 80%, without overshoot, in some cases. Even with the larger mass of the actual CNC machine, decrease of the rise time and elimination of the overshoot were obtained in most cases. These results lead to the conclusion that the adaptive inverse controller is a viable approach to position control in CNC machinery.
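The first stage described above, building a non-recursive (FIR) model of the plant with the LMS algorithm, can be sketched as standard system identification. The plant impulse response and parameters below are hypothetical stand-ins, and the FTC inverse-controller stage is not shown:

```python
import numpy as np

def lms_identify(u, d, n_taps=16, mu=0.02):
    """Fit a non-recursive (FIR) model of a plant with the LMS rule."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(u)):
        un = u[n - n_taps + 1:n + 1][::-1]   # input regressor, newest first
        e = d[n] - w @ un                    # model output error
        w = w + mu * e * un                  # LMS coefficient update
    return w

rng = np.random.default_rng(2)
u = rng.normal(size=20000)                   # persistent excitation signal
plant = np.array([0.5, 1.0, 0.3, -0.1])      # hypothetical plant impulse response
d = np.convolve(u, plant)[: len(u)]          # measured plant output (noise-free here)

w = lms_identify(u, d)
```

With white excitation and no measurement noise the learned taps converge to the plant impulse response; the remaining taps go to zero.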

Relevance: 70.00%

Abstract:

Negative-ion mode electrospray ionization, ESI(-), with Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) was coupled with partial least squares (PLS) regression and variable selection methods to estimate the total acid number (TAN) of Brazilian crude oil samples. ESI(-)-FT-ICR mass spectra typically show a resolving power of ca. 500,000 and a mass accuracy better than 1 ppm, producing a data matrix containing over 5700 variables per sample. These variables correspond to heteroatom-containing species detected as deprotonated molecules, [M - H](-) ions, identified primarily as naphthenic acids, phenols and carbazole-analogue species. The TAN values of the samples ranged from 0.06 to 3.61 mg of KOH g(-1). To facilitate spectral interpretation, three variable selection methods were studied: variable importance in the projection (VIP), interval partial least squares (iPLS) and elimination of uninformative variables (UVE). The UVE method appears to be the most appropriate for selecting important variables, reducing the number of variables to 183 and producing a root mean square error of prediction of 0.32 mg of KOH g(-1). By reducing the size of the data, it was possible to relate the selected variables to their corresponding molecular formulas, thus identifying the main chemical species responsible for the TAN values.
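PLS regression as used above can be sketched with a minimal NIPALS PLS1 loop on synthetic data. The FT-ICR MS matrix itself is of course not reproduced; the dimensions, latent structure and noise levels below are invented:

```python
import numpy as np

def pls1(X, y, n_comp):
    """PLS1 regression via NIPALS deflation; returns coefficients for centered data."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = X.T @ y                 # weight vector: covariance direction
        w = w / np.linalg.norm(w)
        t = X @ w                   # score
        tt = t @ t
        p = X.T @ t / tt            # X loading
        q = (y @ t) / tt            # y loading
        X = X - np.outer(t, p)      # deflate X
        y = y - q * t               # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    return W @ np.linalg.solve(P.T @ W, Q)   # regression coefficients

rng = np.random.default_rng(3)
n, p = 100, 50
latent = rng.normal(size=(n, 3))             # 3 latent factors drive everything
X = latent @ rng.normal(size=(3, p)) + 0.01 * rng.normal(size=(n, p))
y = latent @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=n)

b = pls1(X, y, n_comp=3)
y_hat = (X - X.mean(axis=0)) @ b + y.mean()
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
```

Because y depends on only three latent directions of X, three PLS components recover it to within the noise level even though X has 50 collinear columns.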

Relevance: 60.00%

Abstract:

The present study focuses on single-case data analysis, and specifically on two procedures for quantifying differences between baseline and treatment measurements. The first technique tested is based on generalized least squares regression analysis and is compared with a proposed non-regression technique that yields similar information. The comparison is carried out on generated data representing a variety of patterns (i.e., independent measurements, different serial dependence underlying processes, constant or phase-specific autocorrelation and data variability, different types of trend, and slope and level change). The results suggest that the two techniques perform adequately over a wide range of conditions, and researchers can use either of them with certain guarantees. The regression-based procedure offers more efficient estimates, whereas the proposed non-regression procedure is more sensitive to intervention effects. Considering current and previous findings, some tentative recommendations are offered to help applied researchers choose among the plurality of single-case data analysis techniques.
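The generalized least squares idea referred to above can be illustrated with a minimal level-change model under AR(1) errors, whitened by the Prais-Winsten transform. This is synthetic single-case-style data; the autocorrelation rho is assumed known here, whereas in practice it would be estimated:

```python
import numpy as np

def gls_ar1(X, y, rho):
    """GLS for AR(1) errors via the Prais-Winsten whitening transform."""
    n = len(y)
    ys = np.empty(n)
    Xs = np.empty_like(X, dtype=float)
    ys[0] = np.sqrt(1 - rho**2) * y[0]       # special first-row scaling
    Xs[0] = np.sqrt(1 - rho**2) * X[0]
    ys[1:] = y[1:] - rho * y[:-1]            # quasi-difference the rest
    Xs[1:] = X[1:] - rho * X[:-1]
    return np.linalg.lstsq(Xs, ys, rcond=None)[0]  # OLS on whitened data

rng = np.random.default_rng(4)
n, rho = 200, 0.6
phase = (np.arange(n) >= n // 2).astype(float)     # 0 = baseline, 1 = treatment
e = np.zeros(n)
for t in range(1, n):                              # AR(1) disturbance
    e[t] = rho * e[t - 1] + rng.normal(scale=0.5)
y = 1.0 + 2.0 * phase + e                          # true level change = 2
X = np.column_stack([np.ones(n), phase])

beta = gls_ar1(X, y, rho)                          # beta[1] estimates the level change
```

Whitening removes the serial dependence so that ordinary least squares on the transformed data gives efficient estimates of the level change.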

Relevance: 60.00%

Abstract:

The Gauss–Newton algorithm is an iterative method regularly used for solving nonlinear least squares problems. It is particularly well suited to the treatment of very large scale variational data assimilation problems that arise in atmosphere and ocean forecasting. The procedure consists of a sequence of linear least squares approximations to the nonlinear problem, each of which is solved by an “inner” direct or iterative process. In comparison with Newton’s method and its variants, the algorithm is attractive because it does not require the evaluation of second-order derivatives in the Hessian of the objective function. In practice the exact Gauss–Newton method is too expensive to apply operationally in meteorological forecasting, and various approximations are made in order to reduce computational costs and to solve the problems in real time. Here we investigate the effects on the convergence of the Gauss–Newton method of two types of approximation used commonly in data assimilation. First, we examine “truncated” Gauss–Newton methods where the inner linear least squares problem is not solved exactly, and second, we examine “perturbed” Gauss–Newton methods where the true linearized inner problem is approximated by a simplified, or perturbed, linear least squares problem. We give conditions ensuring that the truncated and perturbed Gauss–Newton methods converge and also derive rates of convergence for the iterations. The results are illustrated by a simple numerical example. A practical application to the problem of data assimilation in a typical meteorological system is presented.
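The basic iteration described above, a sequence of linear least squares approximations to the nonlinear problem, can be sketched on a small curve-fitting example (not a data assimilation system; the model and data are invented). A "truncated" variant would simply solve the inner linear problem inexactly:

```python
import numpy as np

def gauss_newton(r, J, x0, n_iter=20):
    """Gauss-Newton: each step solves the inner linear LSQ  min ||J(x) dx + r(x)||."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        dx = np.linalg.lstsq(J(x), -r(x), rcond=None)[0]  # inner linear solve
        x = x + dx
    return x

# Fit the nonlinear model y = a * exp(b * t) to noisy data
rng = np.random.default_rng(5)
t = np.linspace(0, 1, 40)
y = 2.0 * np.exp(-1.5 * t) + 0.001 * rng.normal(size=t.size)

r = lambda x: x[0] * np.exp(x[1] * t) - y                 # residual vector
J = lambda x: np.column_stack([np.exp(x[1] * t),          # dr/da
                               x[0] * t * np.exp(x[1] * t)])  # dr/db

x = gauss_newton(r, J, x0=[1.0, 0.0])
```

Note that only first derivatives appear: the Jacobian J replaces the full Hessian, which is exactly the attraction of the method noted in the abstract.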

Relevance: 60.00%

Abstract:

We consider the linear equality-constrained least squares problem (LSE) of minimizing $\|c - Gx\|_2$, subject to the constraint $Ex = p$. A preconditioned conjugate gradient method is applied to the Kuhn–Tucker equations associated with the LSE problem. We show that our method is well suited to structural optimization problems in reliability analysis and optimal design. Numerical tests are performed on an Alliant FX/8 multiprocessor and a Cray X-MP using practical structural analysis data.
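The Kuhn–Tucker equations mentioned above can be sketched directly: for min $\|c - Gx\|_2$ subject to $Ex = p$, the stationarity and feasibility conditions form one symmetric linear system. The sketch below solves it with a dense factorization rather than the paper's preconditioned conjugate gradient method, and all matrices are random illustrations:

```python
import numpy as np

def lse_kkt(G, c, E, p):
    """Solve min ||c - G x||_2 s.t. E x = p via the Kuhn-Tucker system:

        [G'G  E'] [x  ]   [G'c]
        [E    0 ] [lam] = [p  ]
    """
    n = G.shape[1]
    m = E.shape[0]
    K = np.block([[G.T @ G, E.T],
                  [E, np.zeros((m, m))]])
    rhs = np.concatenate([G.T @ c, p])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]                       # drop the Lagrange multipliers

rng = np.random.default_rng(6)
G = rng.normal(size=(20, 5))
c = rng.normal(size=20)
E = rng.normal(size=(2, 5))
p = rng.normal(size=2)

x = lse_kkt(G, c, E, p)
```

The solution satisfies the constraint exactly, and the least squares gradient $G^T(Gx - c)$ lies in the row space of $E$, which is the optimality condition for this problem.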