51 results for Wavelet neural network
Abstract:
Background: Schizophrenia has been associated with semantic memory impairment, and previous studies report a difficulty in accessing semantic category exemplars (Moelter et al. 2005 Schizophr Res 78:209–217). The anterior temporal cortex (ATC) has been implicated in the representation of semantic knowledge (Rogers et al. 2004 Psychol Rev 111(1):205–235). We conducted a high-field (4T) fMRI study with the Category Judgment and Substitution Task (CJAST), an analogue of the Hayling test. We hypothesised that differential activation of the temporal lobe would be observed in schizophrenia patients versus controls. Methods: Eight schizophrenia patients (7M : 1F) and eight matched controls performed the CJAST, which involved a randomised series of 55 common nouns (from five semantic categories) across three conditions: semantic categorisation, anomalous categorisation and word reading. High-resolution 3D T1-weighted images and GE EPI with BOLD contrast and sparse temporal sampling were acquired on a 4T Bruker MedSpec system. Image processing and analyses were performed with SPM2. Results: Differential activation in the left ATC was found for anomalous categorisation relative to category judgment in patients versus controls. Conclusions: We examined semantic memory deficits in schizophrenia using a novel fMRI task. Since the ATC corresponds to an area involved in accessing abstract semantic representations (Rogers et al. 2004), these results suggest that schizophrenia patients utilise the same neural network as healthy controls, but that this network is compromised in patients; the different ATC activity might be attributable to a weakening of category-to-category associations.
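The group-by-condition effect reported above (anomalous categorisation versus category judgment, in patients versus controls) is the kind of result that is typically tested as an interaction contrast in a general linear model. The study used SPM2; purely as an illustrative sketch, the Python snippet below shows how such a contrast could be written down, with a hypothetical ordering of condition regressors and made-up beta values.

import numpy as np

# Hypothetical design-matrix columns: one regressor per group-by-condition cell.
# The actual SPM2 model in the study may be parameterised differently.
columns = [
    "patients_anomalous",   # anomalous categorisation, patient group
    "patients_category",    # category judgment, patient group
    "controls_anomalous",   # anomalous categorisation, control group
    "controls_category",    # category judgment, control group
]

# Interaction contrast: (anomalous - category) in patients
# minus (anomalous - category) in controls.
contrast = np.array([1.0, -1.0, -1.0, 1.0])

# Given fitted betas (one per column), the contrast estimate is a dot product.
betas = np.array([0.8, 0.3, 0.5, 0.4])   # illustrative numbers only
print("interaction effect:", float(contrast @ betas))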
Abstract:
This study describes a simple method for the long-term establishment of human ovarian tumor lines and the prediction of T-cell epitopes that could be potentially useful in the generation of tumor-specific cytotoxic T lymphocytes (CTLs). Nine ovarian tumor lines (INT.Ov) were generated from solid primary or metastatic tumors as well as from ascitic fluid. Notably, all lines expressed HLA class I, intercellular adhesion molecule-1 (ICAM-1), polymorphic epithelial mucin (PEM) and cytokeratin (CK), but not HLA class II, B7.1 (CD80) or BAGE. Of the 9 lines tested, 4 (INT.Ov1, 2, 5 and 6) expressed the folate receptor (FR-alpha) and 6 (INT.Ov1, 2, 5, 6, 7 and 9) expressed the epidermal growth factor receptor (EGFR), while MAGE-1 and p185(HER-2/neu) were found in only 2 lines (INT.Ov1 and 2) and GAGE-1 expression in 1 line (INT.Ov2). The identification of class I MHC ligands and T-cell epitopes within protein antigens was achieved by applying several theoretical methods, including: 1) similarity or homology searches against MHCPEP; 2) BIMAS; and 3) artificial neural network-based predictions for the proteins MAGE, GAGE, EGFR, p185(HER-2/neu) and FR-alpha expressed in the INT.Ov lines. Because of the high frequency of expression of some of these proteins in ovarian cancer and the ability to determine HLA-binding peptides efficiently, it is expected that, after appropriate screening, a large cohort of ovarian cancer patients may become candidates to receive peptide-based vaccines. (C) 1997 Wiley-Liss, Inc.
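The peptide-screening step described above combines exhaustive enumeration of candidate class I MHC ligands with a scoring model (MHCPEP lookups, BIMAS matrices, or a trained neural network). As a minimal sketch of the enumeration-and-ranking idea only, the Python snippet below slides a 9-residue window over a protein sequence and ranks the peptides with a toy anchor-residue score; the window length, the scoring rule and the example sequence are illustrative assumptions, not the methods used in the study.

# Minimal sketch: enumerate candidate 9-mer peptides from a protein sequence
# and rank them with a placeholder binding score. Real pipelines (BIMAS,
# trained neural networks) use allele-specific matrices or learned models.

def candidate_peptides(sequence, length=9):
    """Yield every contiguous peptide of the given length."""
    for i in range(len(sequence) - length + 1):
        yield sequence[i:i + length]

def toy_binding_score(peptide):
    """Toy score rewarding common HLA-A2 anchor residues
    (Leu/Met at position 2, Val/Leu at the C-terminus). Illustrative only."""
    score = 0.0
    if peptide[1] in "LM":
        score += 1.0
    if peptide[-1] in "VL":
        score += 1.0
    return score

fragment = "MKLVNLLLCCSVLLLGASA"   # made-up sequence fragment, not a real antigen
ranked = sorted(candidate_peptides(fragment), key=toy_binding_score, reverse=True)
for peptide in ranked[:3]:
    print(peptide, toy_binding_score(peptide))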
Abstract:
Combinatorial optimization problems share an interesting property with spin glass systems in that their state spaces can exhibit ultrametric structure. We use sampling methods to analyse the points of attraction (local minima) on the error surfaces of feedforward multi-layer perceptron neural networks learning encoder problems. The third-order statistics of these points of attraction are examined and found to be arranged in a highly ultrametric way. This is a unique result for a finite, continuous parameter space. The implications of this result are discussed.
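Ultrametricity is usually assessed through the triangles (third-order statistics) formed by triples of sampled solutions: in an ultrametric space every triangle is isosceles, with the two largest of its three pairwise distances equal. The Python sketch below illustrates one way such a check could be coded; the random vectors stand in for sampled network weight configurations and are an assumption, not the paper's sampling procedure.

import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the weight vectors of trained networks (points of attraction).
solutions = rng.normal(size=(50, 20))

def mean_isosceles_gap(points, n_triples=2000):
    """Average relative gap between the two largest sides of random triangles.
    Values near zero indicate nearly isosceles triangles, i.e. ultrametric-like structure."""
    gaps = []
    for a, b, c in rng.integers(0, len(points), size=(n_triples, 3)):
        if len({a, b, c}) < 3:
            continue  # skip degenerate triples
        d = sorted([
            np.linalg.norm(points[a] - points[b]),
            np.linalg.norm(points[b] - points[c]),
            np.linalg.norm(points[a] - points[c]),
        ])
        gaps.append((d[2] - d[1]) / d[2])
    return float(np.mean(gaps))

print("mean isosceles gap:", round(mean_isosceles_gap(solutions), 3))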
Abstract:
A central problem in visual perception concerns how humans perceive stable and uniform object colors despite variable lighting conditions (i.e. color constancy). One solution is to 'discount' variations in lighting across object surfaces by encoding color contrasts, and utilize this information to 'fill in' properties of the entire object surface. Implicit in this solution is the caveat that the color contrasts defining object boundaries must be distinguished from the spurious color fringes that occur naturally along luminance-defined edges in the retinal image (i.e. optical chromatic aberration). In the present paper, we propose that the neural machinery underlying color constancy is complemented by an 'error-correction' procedure which compensates for chromatic aberration, and suggest that error-correction may be linked functionally to the experimentally induced illusory colored aftereffects known as McCollough effects (MEs). To test these proposals, we develop a neural network model which incorporates many of the receptive-field (RF) profiles of neurons in primate color vision. The model is composed of two parallel processing streams which encode complementary sets of stimulus features: one stream encodes color contrasts to facilitate filling-in and color constancy; the other stream selectively encodes (spurious) color fringes at luminance boundaries, and learns to inhibit the filling-in of these colors within the first stream. Computer simulations of the model illustrate how complementary color-spatial interactions between error-correction and filling-in operations (a) facilitate color constancy, (b) reveal functional links between color constancy and the ME, and (c) reconcile previously reported anomalies in the local (edge) and global (spreading) properties of the ME. We discuss the broader implications of these findings by considering the complementary functional roles performed by RFs mediating color-spatial interactions in the primate visual system. (C) 2002 Elsevier Science Ltd. All rights reserved.
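At its core, the architecture described above is one pathway that propagates color-contrast signals for filling-in and a second pathway that detects chromatic signals coinciding with luminance edges and inhibits the first pathway there. The Python sketch below is a heavily simplified one-dimensional caricature of that interaction; the signal shapes, edge positions and inhibition rule are assumptions for illustration and do not reproduce the receptive-field profiles of the actual model.

import numpy as np

x = np.linspace(0.0, 1.0, 400)

# Toy scene along one image row: a luminance edge at x = 0.3 with no real
# color change, and a genuine surface-color boundary at x = 0.7.
luminance = (x > 0.3).astype(float)
surface_color = np.where(x > 0.7, 0.6, 0.2)

# Chromatic aberration adds a spurious color fringe at the luminance edge.
fringe = 0.3 * np.exp(-((x - 0.3) ** 2) / (2 * 0.01 ** 2))
chroma = surface_color + fringe

# Stream 1: color-contrast signal that would normally drive filling-in.
color_contrast = np.abs(np.gradient(chroma, x))

# Stream 2: responds to luminance edges and inhibits the filling-in stream
# in their neighbourhood (the "error-correction" step).
edge_signal = np.abs(np.gradient(luminance, x))
kernel = np.exp(-np.linspace(-3.0, 3.0, 61) ** 2 / 2.0)
inhibition = np.convolve(edge_signal, kernel / kernel.sum(), mode="same")
inhibition /= inhibition.max() + 1e-9
corrected = color_contrast * (1.0 - inhibition)

def peak_near(signal, center, half_width=0.05):
    """Largest value of the signal within a window around `center`."""
    window = np.abs(x - center) < half_width
    return round(float(signal[window].max()), 2)

print("near luminance edge :", peak_near(color_contrast, 0.3), "->", peak_near(corrected, 0.3))
print("near color boundary :", peak_near(color_contrast, 0.7), "->", peak_near(corrected, 0.7))

In this toy version the spurious fringe at the luminance edge is attenuated while the genuine surface-color boundary is left to drive filling-in, which is the division of labour the two streams are meant to capture.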
Abstract:
The long short-term memory (LSTM) network is not the only neural network that learns a context-sensitive language. Second-order sequential cascaded networks (SCNs) are able to induce, from a finite fragment of a context-sensitive language, the means for processing strings outside the training set. The dynamical behavior of the SCN is qualitatively distinct from that observed in LSTM networks. Differences in performance and dynamics are discussed.
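In a second-order sequential cascaded network the previous state and the current input interact multiplicatively: a weight tensor is contracted with both to produce the next state, so the input effectively selects the weights applied to the state. The Python sketch below runs a few steps of such a dynamic with untrained random weights; the layer sizes, the alphabet and the one-hot encoding are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

# Dimensions (illustrative): state units and input units for a small alphabet.
n_state, n_input = 4, 3

# Second-order weights: one (state x input) slice per next-state unit, plus a bias.
# Untrained random values, for illustration only.
W = rng.normal(scale=0.5, size=(n_state, n_state, n_input))
b = rng.normal(scale=0.1, size=n_state)

def step(state, symbol):
    """One update: next_state[i] = sigmoid(sum_jk W[i, j, k] * state[j] * symbol[k] + b[i])."""
    pre = np.einsum("ijk,j,k->i", W, state, symbol) + b
    return 1.0 / (1.0 + np.exp(-pre))

# Process the string "aab" using one-hot symbols over the alphabet {a, b, #}.
alphabet = {"a": 0, "b": 1, "#": 2}
state = np.full(n_state, 0.5)   # arbitrary initial state
for ch in "aab":
    one_hot = np.zeros(n_input)
    one_hot[alphabet[ch]] = 1.0
    state = step(state, one_hot)
print("final state:", np.round(state, 3))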
Abstract:
Recent work by Siegelmann has shown that the computational power of recurrent neural networks matches that of Turing machines. One important implication is that complex language classes (infinite languages with embedded clauses) can be represented in neural networks. The proofs are based on a fractal encoding of states to simulate the memory and operations of stacks. In the present work, it is shown that similar stack-like dynamics can be learned by recurrent neural networks from simple sequence prediction tasks. Two main types of network solution are found and described qualitatively as dynamical systems: damped oscillation and entangled spiraling around fixed points. The potential and limitations of each solution type are established in terms of generalization on two different context-free languages. Both solution types constitute novel stack implementations, generally in line with Siegelmann's theoretical work, which supply insights into how embedded structures of languages can be handled in analog hardware.
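The fractal encoding referred to above packs an entire stack into a single bounded activation value: pushing a symbol shifts the current value into a smaller sub-interval, and popping reads off which sub-interval the value lies in and expands it back out. The Python sketch below uses a plain binary-expansion variant of this idea for clarity; Siegelmann's construction employs a more robust Cantor-style base-4 code, and the helper names here are purely illustrative.

# Minimal sketch of a fractal (binary-expansion) stack encoding: the entire
# stack lives in a single number in [0, 1).

def push(x, bit):
    """Shift the stack value into the lower (bit 0) or upper (bit 1) half-interval."""
    return (x + bit) / 2.0

def pop(x):
    """Read the top bit from which half-interval x lies in, then expand back out."""
    bit = 1 if x >= 0.5 else 0
    return bit, 2.0 * x - bit

stack = 0.0
for symbol in [1, 0, 1, 1]:      # push the sequence 1, 0, 1, 1
    stack = push(stack, symbol)

popped = []
for _ in range(4):               # pops return the symbols in reverse (last-in-first-out) order
    top, stack = pop(stack)
    popped.append(top)

print(popped)                    # -> [1, 1, 0, 1]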