4 results for inversor MLP

at University of Queensland eSpace - Australia


Relevance:

10.00%

Publisher:

Abstract:

We describe the genomic organization of a recently identified CC chemokine, MIP-3 alpha/CCL20 (HGMW-approved symbol SCYA20). The MIP-3 alpha/CCL20 gene was cloned and sequenced, revealing a four-exon, three-intron structure, and was localized by FISH analysis to 2q35-q36. Two distinct cDNAs were identified, encoding two forms of MIP-3 alpha/CCL20, Ala MIP-3 alpha/CCL20 and Ser MIP-3 alpha/CCL20, that differ by one amino acid at the predicted signal peptide cleavage site. Examination of the sequence around the boundary of intron 1 and exon 2 showed that use of alternative splice acceptor sites could give rise to Ala MIP-3 alpha/CCL20 or Ser MIP-3 alpha/CCL20. Both forms of MIP-3 alpha/CCL20 were chemically synthesized and tested for biological activity. Both flu antigen plus IL-2-activated CD4(+) and CD8(+) T lymphoblasts and cord blood-derived dendritic cells responded to Ser and Ala MIP-3 alpha/CCL20. T lymphocytes exposed only to IL-2 responded inconsistently, while no response was detected in naive T lymphocytes, monocytes, or neutrophils. The biological significance of Ser MIP-3 alpha/CCL20 and Ala MIP-3 alpha/CCL20 and the tissue-specific preference of different splice acceptor sites are not yet known. (C) 2001 Academic Press.

Relevance:

10.00%

Publisher:

Abstract:

The expectation-maximization (EM) algorithm has been of considerable interest in recent years as the basis for various algorithms in application areas of neural networks such as pattern recognition. However, there exist some misconceptions concerning its application to neural networks. In this paper, we clarify these misconceptions and consider how the EM algorithm can be adapted to train multilayer perceptron (MLP) and mixture of experts (ME) networks in applications to multiclass classification. We identify some situations where the application of the EM algorithm to train MLP networks may be of limited value and discuss some ways of handling the difficulties. For ME networks, it is reported in the literature that networks trained by the EM algorithm using the iteratively reweighted least squares (IRLS) algorithm in the inner loop of the M-step often performed poorly in multiclass classification. However, we found that the convergence of the IRLS algorithm is stable and that the log likelihood is monotonically increasing when a learning rate smaller than one is adopted. Also, we propose the use of an expectation-conditional maximization (ECM) algorithm to train ME networks. Its performance is demonstrated to be superior to that of the IRLS algorithm on some simulated and real data sets.
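The stabilization mentioned above (applying a learning rate smaller than one to the IRLS update) can be illustrated with a minimal, hypothetical sketch: a damped IRLS/Newton fit of a binary logistic model in NumPy. This is not the paper's mixture-of-experts implementation; the function name, the toy data, and the damping value are assumptions for illustration only.

import numpy as np

def irls_logistic(X, y, n_iter=50, lr=0.5):
    # Fit binary logistic regression by damped IRLS (Newton) updates.
    # lr < 1 scales each step, illustrating the damped update discussed above.
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))                # predicted probabilities
        W = p * (1.0 - p)                               # IRLS weights (diagonal of the Hessian)
        grad = X.T @ (y - p)                            # gradient of the log likelihood
        H = X.T @ (X * W[:, None]) + 1e-8 * np.eye(d)   # Hessian plus small jitter
        w = w + lr * np.linalg.solve(H, grad)           # damped Newton/IRLS step
    return w

# Toy usage on hypothetical 2-D data (for illustration only)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
X = np.hstack([np.ones((100, 1)), X])                   # add bias column
y = np.array([0] * 50 + [1] * 50)
w = irls_logistic(X, y)
print("training accuracy:", np.mean((X @ w > 0) == y))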

Relevance:

10.00%

Publisher:

Abstract:

Fast Classification (FC) networks were inspired by a biologically plausible mechanism for short-term memory in which learning occurs instantaneously. Both the weights and the topology of an FC network are mapped directly from the training samples by a prescriptive training scheme. Only two presentations of the training data are required to train an FC network. Compared with iterative learning algorithms such as back-propagation (which may require many hundreds of presentations of the training data), the training of FC networks is extremely fast and learning convergence is always guaranteed. Thus FC networks may be suitable for applications where real-time classification is needed. In this paper, FC networks are applied to the real-time extraction of gene expressions from Chlamydia microarray data. Both the classification performance and the learning time of the FC networks are compared with those of Multi-Layer Perceptron (MLP) networks and support vector machines (SVMs) on the same classification task. The FC networks are shown to have extremely fast learning times and comparable classification accuracy.
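The FC construction itself is not detailed in this abstract. As a rough, hypothetical analogue of prescriptive (non-iterative) learning, the sketch below sets a classifier's weights directly from the training samples in a single pass, one prototype unit per class. It is explicitly not the FC algorithm; all names and data here are assumptions for illustration only.

import numpy as np

def fit_prototypes(X, y):
    # One pass over the data: the "weights" are simply the class means,
    # copied directly from the training samples with no iterative optimization.
    classes = np.unique(y)
    prototypes = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, prototypes

def predict(X, classes, prototypes):
    # Assign each sample to the class of its nearest prototype.
    d = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(d, axis=1)]

# Hypothetical toy data, for illustration only
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(2, 1, (30, 5))])
y = np.array([0] * 30 + [1] * 30)
classes, protos = fit_prototypes(X, y)
print("training accuracy:", np.mean(predict(X, classes, protos) == y))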