50 results for "neural network technique"


Relevance: 80.00%

Abstract:

This study describes a simple method for long-term establishment of human ovarian tumor lines and prediction of T-cell epitopes that could be potentially useful in the generation of tumor-specific cytotoxic T lymphocytes (CTLs). Nine ovarian tumor lines (INT.Ov) were generated from solid primary or metastatic tumors as well as from ascitic fluid. Notably, all lines expressed HLA class I, intercellular adhesion molecule-1 (ICAM-1), polymorphic epithelial mucin (PEM) and cytokeratin (CK), but not HLA class II, B7.1 (CD80) or BAGE. Of the 9 lines tested, 4 (INT.Ov1, 2, 5 and 6) expressed the folate receptor (FR-alpha) and 6 (INT.Ov1, 2, 5, 6, 7 and 9) expressed the epidermal growth factor receptor (EGFR); MAGE-1 and p185(HER-2/neu) were found in only 2 lines (INT.Ov1 and 2), and GAGE-1 expression in 1 line (INT.Ov2). The identification of class I MHC ligands and T-cell epitopes within protein antigens was achieved by applying several theoretical methods, including: 1) similarity or homology searches against MHCPEP; 2) BIMAS; and 3) artificial neural network-based predictions for the proteins MAGE, GAGE, EGFR, p185(HER-2/neu) and FR-alpha expressed in the INT.Ov lines. Because of the high frequency of expression of some of these proteins in ovarian cancer and the ability to determine HLA-binding peptides efficiently, it is expected that, after appropriate screening, a large cohort of ovarian cancer patients may become candidates to receive peptide-based vaccines. (C) 1997 Wiley-Liss, Inc.
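A matrix-scan prediction of the kind listed under method 2 can be sketched as follows. The coefficient table, protein string, and function names below are illustrative toys for the general BIMAS-style approach (a 9-mer's predicted binding is a product of per-position coefficients), not the published BIMAS matrices:

```python
# Toy sketch of BIMAS-style peptide-MHC scoring: a 9-mer peptide's
# predicted HLA binding is the product of per-position coefficients.
# The coefficient table is illustrative only, not a published matrix.

COEFFS = {
    # position (1-based) -> {amino acid: coefficient}; others score 1.0
    1: {"L": 10.0, "M": 5.0},
    2: {"L": 20.0, "I": 8.0},
    9: {"V": 15.0, "L": 10.0},
}

def bimas_like_score(peptide: str) -> float:
    """Score a 9-mer by multiplying per-position coefficients."""
    if len(peptide) != 9:
        raise ValueError("expected a 9-mer")
    score = 1.0
    for pos, residue in enumerate(peptide, start=1):
        score *= COEFFS.get(pos, {}).get(residue, 1.0)
    return score

def rank_epitopes(protein: str, top_n: int = 3) -> list:
    """Slide a 9-residue window over a protein sequence and rank
    candidate epitopes by predicted binding score."""
    candidates = [protein[i:i + 9] for i in range(len(protein) - 8)]
    return sorted(candidates, key=bimas_like_score, reverse=True)[:top_n]
```

Real pipelines would combine such a scan with homology searches and a trained neural network predictor, as the abstract describes.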

Relevance: 80.00%

Abstract:

Combinatorial optimization problems share an interesting property with spin-glass systems in that their state spaces can exhibit ultrametric structure. We use sampling methods to analyse the error surfaces of feedforward multi-layer perceptron neural networks learning encoder problems. The third-order statistics of the sampled points of attraction are examined and found to be arranged in a highly ultrametric way. This is a unique result for a finite, continuous parameter space. The implications of this result are discussed.
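The ultrametricity test behind such third-order statistics can be sketched directly: in an ultrametric space every triangle is isosceles with its two largest sides equal, so one can measure how far sampled triples of points deviate from that property. This is a toy sketch of the general idea, not the paper's exact statistic:

```python
import itertools
import math

def euclidean(a, b):
    """Euclidean distance between two points given as coordinate tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ultrametric_violation(p, q, r, dist=euclidean):
    """In an ultrametric space every triangle is isosceles with its two
    largest sides equal; return the relative gap between those two sides
    (0.0 means the triple is perfectly ultrametric)."""
    sides = sorted([dist(p, q), dist(q, r), dist(p, r)])
    if sides[2] == 0:
        return 0.0
    return (sides[2] - sides[1]) / sides[2]

def mean_violation(points, dist=euclidean):
    """Average violation over all triples (lower = more ultrametric)."""
    triples = list(itertools.combinations(points, 3))
    return sum(ultrametric_violation(*t, dist=dist) for t in triples) / len(triples)
```

Applied to sampled minima of an error surface, a mean violation near zero would indicate the highly ultrametric arrangement the abstract reports.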

Relevance: 80.00%

Abstract:

A central problem in visual perception concerns how humans perceive stable and uniform object colors despite variable lighting conditions (i.e. color constancy). One solution is to 'discount' variations in lighting across object surfaces by encoding color contrasts, and utilize this information to 'fill in' properties of the entire object surface. Implicit in this solution is the caveat that the color contrasts defining object boundaries must be distinguished from the spurious color fringes that occur naturally along luminance-defined edges in the retinal image (i.e. optical chromatic aberration). In the present paper, we propose that the neural machinery underlying color constancy is complemented by an 'error-correction' procedure which compensates for chromatic aberration, and suggest that error-correction may be linked functionally to the experimentally induced illusory colored aftereffects known as McCollough effects (MEs). To test these proposals, we develop a neural network model which incorporates many of the receptive-field (RF) profiles of neurons in primate color vision. The model is composed of two parallel processing streams which encode complementary sets of stimulus features: one stream encodes color contrasts to facilitate filling-in and color constancy; the other stream selectively encodes (spurious) color fringes at luminance boundaries, and learns to inhibit the filling-in of these colors within the first stream. Computer simulations of the model illustrate how complementary color-spatial interactions between error-correction and filling-in operations (a) facilitate color constancy, (b) reveal functional links between color constancy and the ME, and (c) reconcile previously reported anomalies in the local (edge) and global (spreading) properties of the ME. We discuss the broader implications of these findings by considering the complementary functional roles performed by RFs mediating color-spatial interactions in the primate visual system. 
(C) 2002 Elsevier Science Ltd. All rights reserved.
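As a toy one-dimensional illustration of the two-stream idea (not the published model), one can sketch how inhibiting color contrasts that coincide with luminance edges alters filling-in; the function name, threshold, and signal encoding below are all illustrative assumptions:

```python
# 1-D sketch of the two-stream scheme described above (a toy, not the
# published model): stream 1 encodes color contrasts and "fills in"
# surface color by integrating them; stream 2 flags color contrasts that
# coincide with luminance edges (candidate chromatic-aberration fringes)
# and inhibits their contribution to filling-in.

def fill_in(color, luminance, fringe_threshold=0.5):
    """Return the filled-in color signal after suppressing color
    contrasts located at strong luminance edges."""
    out, total = [], 0.0
    prev_c, prev_l = color[0], luminance[0]
    for c, l in zip(color, luminance):
        dc, dl = c - prev_c, abs(l - prev_l)
        if dl < fringe_threshold:   # stream 2: gate out fringe contrasts
            total += dc             # stream 1: spatial integration
        out.append(total)
        prev_c, prev_l = c, l
    return out
```

In this sketch, a color step that coincides with a luminance edge is treated as a spurious fringe and does not spread, whereas a color step on a flat luminance background fills in normally.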

Relevance: 80.00%

Abstract:

The long short-term memory (LSTM) is not the only neural network that learns a context-sensitive language. Second-order sequential cascaded networks (SCNs) are able to induce, from a finite fragment of a context-sensitive language, a means of processing strings outside the training set. The dynamical behavior of the SCN is qualitatively distinct from that observed in LSTM networks. Differences in performance and dynamics are discussed.
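A minimal second-order state update of the kind an SCN uses can be sketched as below. The bilinear form in (previous state, current input) is the defining "second-order" feature; the weights here are random and training is omitted, so every name and size is an illustrative assumption:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class SCN:
    """Toy second-order recurrent step: each next-state unit is a sigmoid
    of a bilinear form W[i][j][k] * state[j] * input[k] plus a bias.
    Weights are random; Pollack-style training is omitted."""

    def __init__(self, n_state, n_input, seed=0):
        rng = random.Random(seed)
        self.W = [[[rng.uniform(-1, 1) for _ in range(n_input)]
                   for _ in range(n_state)] for _ in range(n_state)]
        self.b = [rng.uniform(-1, 1) for _ in range(n_state)]

    def step(self, state, x):
        return [sigmoid(self.b[i] + sum(self.W[i][j][k] * state[j] * x[k]
                                        for j in range(len(state))
                                        for k in range(len(x))))
                for i in range(len(self.W))]

    def run(self, string, encode):
        """Process a symbol string, starting from a neutral state."""
        state = [0.5] * len(self.W)
        for symbol in string:
            state = self.step(state, encode(symbol))
        return state
```

Because the input multiplies the state inside the update, each symbol effectively selects a different state-transition map, which is what gives second-order networks their automaton-like flavor.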

Relevance: 80.00%

Abstract:

Recent work by Siegelmann has shown that the computational power of recurrent neural networks matches that of Turing Machines. One important implication is that complex language classes (infinite languages with embedded clauses) can be represented in neural networks. Proofs are based on a fractal encoding of states to simulate the memory and operations of stacks. In the present work, it is shown that similar stack-like dynamics can be learned in recurrent neural networks from simple sequence prediction tasks. Two main types of network solutions are found and described qualitatively as dynamical systems: damped oscillation and entangled spiraling around fixed points. The potential and limitations of each solution type are established in terms of generalization on two different context-free languages. Both solution types constitute novel stack implementations - generally in line with Siegelmann's theoretical work - which supply insights into how embedded structures of languages can be handled in analog hardware.
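A fractal encoding of stack states of the kind these proofs rely on can be sketched with affine push/pop maps on [0, 1); this follows the spirit of Siegelmann's construction rather than her exact network implementation:

```python
# Fractal stack encoding in the spirit of Siegelmann's construction
# (a sketch, not her exact neural implementation): a stack of bits is
# stored as a single number in [0, 1), and push/pop are affine maps
# that a recurrent network with saturating units can realize.

def push(s: float, bit: int) -> float:
    """Push a bit; values land in a Cantor-like set, keeping the
    encodings of distinct stacks well separated."""
    return (s + 2 * bit + 1) / 4

def top(s: float) -> int:
    """Read the top bit without popping."""
    return 1 if s >= 0.5 else 0

def pop(s: float) -> float:
    """Remove the top bit, inverting the push map."""
    return 4 * s - 2 * top(s) - 1
```

Starting from 0.0 for the empty stack, any push sequence is exactly invertible by pops, which is the stack-like dynamics the trained networks in this work approximate with damped oscillations or entangled spirals around fixed points.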