758 results for neural network model
Abstract:
Breast cancer ranks first in mortality among the tumour diseases affecting the female population worldwide. Several clinical studies have shown that the radiologist's diagnosis can be aided and improved by Computer Aided Detection (CAD) systems. Because tumour masses vary greatly in shape and size and closely resemble the tissue that hosts them, their automated detection is an extremely complicated problem. A CAD system generally consists of two classification stages: detection, which locates the suspicious regions of interest (ROIs) on the mammogram and thus preliminarily discards the areas not at risk, and the actual classification of the ROIs into masses and healthy tissue. The main aim of this thesis is the study of new detection methodologies that can improve the performance obtained with traditional techniques. Detection is treated as a supervised learning problem and is addressed with Convolutional Neural Networks (CNNs), an algorithm belonging to deep learning, a new branch of machine learning. CNNs are inspired by the discoveries of Hubel and Wiesel concerning two basic cell types identified in the visual cortex of cats: simple cells (S), which respond to edge-like stimuli, and complex cells (C), which are locally invariant to the exact position of the stimulus. By analogy with the visual cortex, CNNs use a deep architecture characterized by layers that alternately perform convolution and subsampling operations on the images. CNNs, which take a two-dimensional input, are usually applied to classification problems and to the automatic recognition of images such as objects, faces and logos, or to document analysis.
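As a rough illustration of this alternating convolution/subsampling architecture, the sketch below defines a tiny CNN for classifying mammogram ROI patches into mass versus healthy tissue. It is not the network studied in the thesis: the patch size, layer sizes and the use of PyTorch are assumptions made purely for the example.

# Minimal sketch of a CNN with alternating convolution and subsampling
# (pooling) layers for classifying mammogram ROI patches into mass vs.
# healthy tissue. Illustrative only: the 64x64 single-channel patch size,
# layer widths and use of PyTorch are assumptions, not the thesis setup.
import torch
import torch.nn as nn

class RoiCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5),   # convolution layer
            nn.ReLU(),
            nn.MaxPool2d(2),                  # subsampling layer
            nn.Conv2d(8, 16, kernel_size=5),  # convolution layer
            nn.ReLU(),
            nn.MaxPool2d(2),                  # subsampling layer
        )
        self.classifier = nn.Linear(16 * 13 * 13, 2)  # mass vs. healthy tissue

    def forward(self, x):
        x = self.features(x)                  # alternating conv / subsampling
        return self.classifier(x.flatten(1))

# Example: a batch of four 64x64 grayscale ROI patches.
logits = RoiCNN()(torch.randn(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 2])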
Abstract:
This work is focused on the analysis of sea-level change over the last century, based mainly on instrumental observations. Over this period, the individual components of sea-level change are investigated at both global and regional scales. Some of the geophysical processes responsible for current sea-level change, such as glacial isostatic adjustment and the present-day melting of terrestrial ice sources, have been modelled and compared with observations. A new value of global mean sea-level change based on tide gauge observations has been independently assessed at 1.5 mm/year, using corrections for glacial isostatic adjustment obtained with different models as a criterion for tide gauge selection. The long-wavelength spatial variability of the main components of sea-level change has been investigated by means of traditional and new spectral methods. Complex non-linear trends and abrupt sea-level variations shown by tide gauge records have been addressed by applying different approaches to regional case studies. The Ensemble Empirical Mode Decomposition technique has been used to analyse tide gauge records from the Adriatic Sea and to ascertain the existence of cyclic sea-level variations. An Early Warning approach has been adopted to detect tipping points in sea-level records of the North East Pacific and their relationship with oceanic modes. Global sea-level projections to the year 2100 have been obtained by a semi-empirical approach based on an artificial neural network method. In addition, a model-based approach has been applied to the case of the Mediterranean Sea, obtaining sea-level projections to the year 2050.
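For a concrete feel of one step in this kind of analysis, the sketch below estimates a linear sea-level trend at a few tide gauges by least squares, subtracts a modelled glacial isostatic adjustment rate from each, and averages the corrected rates. The gauge series, GIA rates and the use of NumPy are invented for illustration; they are not the data, models or selection criterion used in the thesis.

# Illustrative sketch (not the thesis's actual pipeline): estimate a linear
# relative sea-level trend at each tide gauge by least squares, correct it
# with a modelled glacial isostatic adjustment (GIA) rate, and average the
# corrected rates. All numbers below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2001)

def linear_trend(series, t):
    """Least-squares rate of change (mm/yr) of an annual series."""
    slope, _ = np.polyfit(t, series, 1)
    return slope

# Synthetic annual mean sea level (mm) at three hypothetical gauges.
gauges = {
    "gauge_A": 1.8 * (years - 1900) + rng.normal(0, 20, years.size),
    "gauge_B": 1.2 * (years - 1900) + rng.normal(0, 20, years.size),
    "gauge_C": 2.1 * (years - 1900) + rng.normal(0, 20, years.size),
}
# Hypothetical GIA contributions (mm/yr) from a model; subtracting them
# removes the isostatic signal from each relative sea-level trend.
gia_rate = {"gauge_A": 0.3, "gauge_B": -0.2, "gauge_C": 0.4}

corrected = [linear_trend(s, years) - gia_rate[k] for k, s in gauges.items()]
print(f"GIA-corrected mean rate: {np.mean(corrected):.2f} mm/yr")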
Abstract:
Many developing countries are facing a crisis in water management due to population growth, water scarcity, water contamination and the effects of the world economic crisis. Water distribution systems in developing countries face many challenges for efficient repair and rehabilitation, since the information on the water network is very limited, which makes rehabilitation assessment planning very difficult. Sufficient information and advanced technology in developed countries make the assessment for rehabilitation easy, whereas developing countries have great difficulty assessing the water network, leading to system failures, deterioration of mains and poor water quality in the network due to pipe corrosion and deterioration. This limited information brings into focus the urgent need to develop an economical rehabilitation assessment for water distribution systems adapted to water utilities. The Gaza Strip is the subject of the first case study; it suffers from a severe shortage in water supply, environmental problems and contamination of underground water resources. This research focuses on improving the water supply network in order to reduce water losses, based on a limited database and using ArcGIS together with commercial water network software (WaterCAD). A new approach for the rehabilitation of water pipes is presented in the Gaza City case study. An integrated rehabilitation assessment model has been developed for water pipes, comprising three components: a hydraulic assessment model, a physical assessment model and a structural assessment model. A WaterCAD model integrated with ArcGIS has been developed to produce the hydraulic assessment model for the water network. The model is based on pipe condition assessment, with 100 score points as the maximum for pipe condition. The results of this model indicate that 40% of the water pipelines score fewer than 50 points and that about 10% of the total pipe length scores fewer than 30 points. Using this model, rehabilitation plans for each region of Gaza City can be drawn up according to the available budget and the condition of the pipes. The second case study is Kuala Lumpur, representing semi-developed countries, which has been used to develop an approach to improve the water network under critical conditions using advanced statistical and GIS techniques. Kuala Lumpur (KL) has water losses of about 40% and a high failure rate, which constitutes a severe problem; this case can represent conditions in South Asian countries. Kuala Lumpur has faced major challenges in reducing water losses in its network over the last five years. One of these challenges is the high deterioration of asbestos cement (AC) pipes: more than 6,500 km of AC pipes need to be replaced, which requires a huge budget. Asbestos cement is subject to deterioration due to various chemical processes that either leach out the cement material or penetrate the concrete to form products that weaken the cement matrix. This case presents a geo-statistical approach for modelling pipe failures in a water distribution network. The database of Syabas (the Kuala Lumpur water company) has been used in developing the model. The statistical models have been calibrated, verified and used to predict failures for both networks and individual pipes. The mathematical formulation developed for failure frequency in Kuala Lumpur was based on different pipeline characteristics, reflecting factors such as pipe diameter, length, pressure and failure history.
Generalized linear models have been applied to predict pipe failures at the District Meter Zone (DMZ) and individual pipe levels. From the Kuala Lumpur case study, several outputs and implications have been obtained. Correlations between spatial and temporal intervals of pipe failures have also been examined using ArcGIS software. A Water Pipe Assessment Model (WPAM) has been developed using the analysis of historical pipe failures in Kuala Lumpur, which prioritizes pipe rehabilitation candidates based on a ranking system. The Frankfurt water network in Germany is the third main case study; it provides an overview of the survival analysis and neural network methods used for water networks. Rehabilitation strategies for water pipes have been developed for the Frankfurt water network in cooperation with Mainova (the Frankfurt water company). This thesis also presents a methodology for the technical condition assessment of plastic pipes based on simple analysis. The thesis aims to contribute to improving the prediction of pipe failures in water networks using Geographic Information Systems (GIS) and Decision Support Systems (DSS). The output of the technical condition assessment model can be used to estimate future budget needs for rehabilitation and to identify pipes with high priority for replacement due to poor condition.
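As a rough sketch of what such a generalized linear model can look like, the example below fits a Poisson GLM of pipe failure counts on diameter, pressure and failure history, with pipe length as the exposure. The data frame, column names and coefficients are synthetic placeholders, not the Syabas database or the model calibrated in the thesis.

# Minimal sketch of a generalized linear model for pipe failures: a Poisson
# GLM of failure counts on pipe diameter, operating pressure and previous
# failures, with pipe length as exposure. All data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
pipes = pd.DataFrame({
    "diameter_mm": rng.choice([100, 150, 200, 300], n),
    "pressure_bar": rng.uniform(2.0, 6.0, n),
    "prev_failures": rng.poisson(1.0, n),
    "length_m": rng.uniform(50, 500, n),
})
# Synthetic failure counts just to make the example runnable.
rate = np.exp(-6.0 + 0.05 * pipes["pressure_bar"]
              + 0.3 * pipes["prev_failures"] - 0.002 * pipes["diameter_mm"])
pipes["failures"] = rng.poisson(rate * pipes["length_m"])

X = sm.add_constant(pipes[["diameter_mm", "pressure_bar", "prev_failures"]])
model = sm.GLM(pipes["failures"], X,
               family=sm.families.Poisson(),
               offset=np.log(pipes["length_m"]))  # length as exposure
result = model.fit()
print(result.summary())
# Predicted failures per pipe could then feed a rehabilitation ranking.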
Abstract:
Epileptic seizures are the manifestations of epilepsy, a major neurological disorder that occurs with high incidence during early childhood. A fundamental mechanism underlying epileptic seizures is the loss of balance between neural excitation and inhibition, tipping the network toward overexcitation. The glycine receptor (GlyR) is an ionotropic neurotransmitter receptor that, upon binding of glycine, opens an anion pore and mediates a consistently inhibitory action in the adult nervous system. While it was previously assumed that GlyRs mediate inhibition mainly in the brain stem and spinal cord, recent studies have reported abundant expression of GlyRs throughout the brain, in particular during neuronal development. However, no information is available on whether activation of GlyRs modulates neural network excitability and epileptiform activity in the immature central nervous system (CNS). The study in this thesis therefore addresses the role of GlyRs in the modulation of neuronal excitability and epileptiform activity in the immature rat brain. Using the in vitro intact corticohippocampal formation (CHF) of rats at postnatal days 4-7 and electrophysiological methods, a series of pharmacological examinations reveals that GlyRs are directly implicated in the control of hippocampal excitation levels at this age. In this thesis I show that GlyRs are functionally expressed in the immature hippocampus and exhibit the classical pharmacology of GlyRs, being activated by both glycine and the presumed endogenous agonist taurine. This study also reveals that high concentrations of taurine are anticonvulsive, whereas lower concentrations are proconvulsive. A substantial fraction of both the pro- and anticonvulsive effects of taurine is mediated via GlyRs, although activation of GABAA receptors also contributes considerably to the taurine effects. Similarly, glycine exerts both pro- and anticonvulsive effects at low and high concentrations, respectively. The proconvulsive effects of taurine and glycine depend on NKCC1-mediated Cl- accumulation, as bath application of the NKCC1 inhibitor bumetanide completely abolishes the proconvulsive effects of low taurine and glycine concentrations. Inhibition of GlyRs with a low concentration of strychnine triggers epileptiform activity in the CA3 region of the immature CHF, indicating that the intrinsic inhibitory action of GlyRs overwhelms their depolarizing action in the immature hippocampus. Additionally, my study indicates that blocking taurine transporters to accumulate endogenous taurine reduces epileptiform activity via activation of GABAA receptors, but not GlyRs, while blocking glycine transporters has no observable effect on epileptiform activity. From the main results of this study it can be concluded that in the immature rat hippocampus activation of GlyRs mediates both pro- and anticonvulsive effects, but that persistent activation of GlyRs is required to prevent intrinsic neuronal overexcitability. In summary, this study uncovers an important role of GlyRs in the modulation of neuronal excitability and epileptiform activity in the immature rat hippocampus, and indicates that the glycinergic system could be a new therapeutic target against epileptic seizures in children.
Abstract:
In recent years, deep learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches that only work in the context of High Performance Computing with enormous amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They stand for two different approaches and points of view under the broad umbrella of deep learning and are the best choices for understanding and pointing out the strengths and weaknesses of each. The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed in large corporations like Google and Facebook to solve face recognition and image auto-tagging problems. HTM, on the other hand, is an emerging paradigm and a mainly unsupervised method that is more biologically inspired. It tries to gain more insight from the computational neuroscience community in order to incorporate concepts such as time, context and attention, typical of the human brain, into the learning process. In the end, the thesis sets out to show that in certain cases, with a smaller quantity of data, HTM can outperform the CNN.
Abstract:
The Default Mode Network (DMN) is a higher-order functional neural network that displays activation during passive rest and deactivation during many types of cognitive task. Accordingly, the DMN is viewed as the neural correlate of internally generated, self-referential cognition. This hypothesis implies that the DMN requires the involvement of cognitive processes such as declarative memory. The present study thus examines the spatial and functional convergence of the DMN and the semantic memory system. Using an active block-design functional Magnetic Resonance Imaging (fMRI) paradigm and Independent Component Analysis (ICA), we trace the DMN and the fMRI signal changes evoked by semantic, phonological and perceptual decision tasks on visually presented words. Our findings show less deactivation during the semantic task than during the two non-semantic tasks for the DMN as a whole and within left-hemispheric DMN regions, i.e., the dorsal medial prefrontal cortex, the anterior cingulate cortex, the retrosplenial cortex, the angular gyrus, the middle temporal gyrus and the anterior temporal region, as well as the right cerebellum. These results demonstrate that well-known semantic regions are spatially and functionally involved in the DMN. The present study further supports the hypothesis of the DMN as an internal mentation system that involves declarative memory functions.
Abstract:
Introduction: Advances in biotechnology have shed light on many biological processes. In biological networks, nodes are used to represent the function of individual entities within a system and have historically been studied in isolation. Network structure adds edges that enable communication between nodes, and an emerging field combines node function and network structure to yield network function. One of the most complex networks known in biology is the neural network within the brain. Modeling neural function will require an understanding of networks, dynamics, and neurophysiology; this work develops modeling techniques that operate at this complex intersection. Methods: Spatial game theory was developed by Nowak in the context of modeling evolutionary dynamics, the way in which species evolve over time. Spatial game theory offers a two-dimensional view in which each node analyzes the state of its neighbors and updates based on its surroundings. Our work builds upon this foundation by studying evolutionary game theory networks with respect to neural networks. The novel concept is that neurons may adopt a particular strategy that allows propagation of information; the strategy may therefore act as the mechanism for gating. Furthermore, the strategy of a neuron, as in a real brain, is impacted by the strategy of its neighbors. The techniques of spatial game theory already established by Nowak are repeated to explain two basic cases and to validate the implementation of the code. Two novel modifications that build on this network and may reflect neural networks are introduced in Chapters 3 and 4. Results: The introduction of the two novel modifications, mutation and rewiring, in large parametric studies resulted in dynamics with an intermediate number of nodes firing at any given time. Further, even small mutation rates result in different dynamics, more representative of the hypothesized ideal state. Conclusions: In both modifications to Nowak's model, the results demonstrate that the network does not become locked into a particular global state of passing all information or blocking all information. It is hypothesized that normal brain function occurs within this intermediate range and that a number of diseases are the result of moving outside of this range.
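To make the setup concrete, here is a minimal sketch of a Nowak-style spatial game with the mutation modification: each lattice cell plays a weak Prisoner's Dilemma with its four neighbours, imitates its best-scoring neighbour, and then flips strategy with a small mutation probability. The payoff parameter, lattice size and mutation rate are illustrative choices, not the values studied in the thesis.

# Sketch of a Nowak-style spatial game on a lattice with mutation:
# imitate the best-scoring neighbour, then flip strategy with small
# probability so the dynamics stay mixed. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
N, b, mutation_rate, steps = 50, 1.8, 0.01, 100
grid = rng.integers(0, 2, (N, N))  # 1 = cooperate, 0 = defect

def payoffs(g):
    """Total payoff of each cell against its four von Neumann neighbours."""
    total = np.zeros_like(g, dtype=float)
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = np.roll(g, shift, axis=(0, 1))
        # C vs C -> 1, D vs C -> b, otherwise 0 (weak Prisoner's Dilemma).
        total += np.where(g == 1, nb, b * nb)
    return total

for _ in range(steps):
    score = payoffs(grid)
    best, best_score = grid.copy(), score.copy()
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb_strat = np.roll(grid, shift, axis=(0, 1))
        nb_score = np.roll(score, shift, axis=(0, 1))
        better = nb_score > best_score
        best[better], best_score[better] = nb_strat[better], nb_score[better]
    grid = best
    flip = rng.random((N, N)) < mutation_rate   # mutation step
    grid[flip] = 1 - grid[flip]

print("fraction cooperating:", grid.mean())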
Abstract:
This thesis is focused on the control of a system with recycle. A new control strategy combining a neural network with a PID controller is proposed. The combined controller was studied and tested on the pressure control of a vaporizer within a para-xylene production process. The major problems are the negative effects of the recycle stream and of delays on stability and performance. The neural network was designed to move the process close to the set points, while the PID controller accomplishes the finer level of disturbance rejection and offset reduction. Our simulation results show that, during control, the neural network was able to capture the nonlinear relationship between the steady state and the manipulated variables. The results also show that disturbance rejection was handled effectively by the PID controller.
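A rough sketch of this kind of combination is shown below: a small neural network learns a nonlinear steady-state map from set point to manipulated variable and supplies a feedforward term, while PI feedback (the derivative term is omitted for brevity) trims the remaining error on a simple first-order process. The process model, gains and training data are invented for illustration and are not the vaporizer model from the thesis.

# Sketch of a neural-network feedforward term combined with PI feedback.
# The NN learns the (nonlinear) steady-state map from desired output to
# manipulated variable; the PI loop handles the remaining error.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

def steady_state_output(u):
    return 2.0 * np.sqrt(u)            # hypothetical nonlinear process gain

# Train the NN on synthetic steady-state data: desired output -> input u.
u_train = rng.uniform(0.1, 10.0, 500)
y_train = steady_state_output(u_train)
nn = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
nn.fit(y_train.reshape(-1, 1), u_train)

def simulate(setpoint, steps=300, dt=0.1, kp=0.8, ki=0.4):
    """Simple first-order process integrated with explicit Euler."""
    y, integral = 0.0, 0.0
    u_ff = float(nn.predict([[setpoint]])[0])       # NN feedforward term
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt
        u = u_ff + kp * error + ki * integral       # feedforward + PI
        y += dt * (-y + steady_state_output(max(u, 0.0)))  # process dynamics
    return y

print(simulate(setpoint=4.0))   # should settle close to 4.0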
Abstract:
Quantitative characterisation of carotid atherosclerosis and classification of plaques as symptomatic or asymptomatic are crucial in planning the optimal treatment of atheromatous plaque. The computer-aided diagnosis (CAD) system described in this paper can analyse ultrasound (US) images of the carotid artery and classify them as symptomatic or asymptomatic based on their echogenicity characteristics. The CAD system consists of three modules: a) the feature extraction module, where first-order statistical (FOS) features and Laws' texture energy are estimated; b) the dimensionality reduction module, where the number of features is reduced using analysis of variance (ANOVA); and c) the classifier module, consisting of a neural network (NN) trained by a novel hybrid method based on genetic algorithms (GAs) together with the back-propagation algorithm. The hybrid method is able to select the most robust features, to adjust the NN architecture automatically and to optimise the classification performance. Performance is measured by accuracy, sensitivity, specificity and the area under the receiver operating characteristic (ROC) curve. The CAD design and development is based on images from 54 symptomatic and 54 asymptomatic plaques. This study demonstrates the ability of a CAD system based on US image analysis and a hybrid-trained NN to identify atheromatous plaques at high risk of stroke.
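A simplified sketch of such a GA/back-propagation hybrid is given below: a genetic algorithm searches over binary feature masks (it could equally encode the network architecture), and each candidate is scored by the cross-validated accuracy of a backprop-trained multilayer perceptron. The synthetic data, GA settings and use of scikit-learn are assumptions for the example only, not the features or classifier of the paper.

# GA wrapper around a backprop-trained MLP: evolve binary feature masks,
# score each by cross-validated accuracy. Data are synthetic stand-ins
# for the plaque texture features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
X, y = make_classification(n_samples=108, n_features=20, n_informative=6,
                           random_state=0)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, (12, X.shape[1]))            # initial population
for generation in range(8):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-6:]]             # keep the best half
    children = []
    for _ in range(6):
        a, b = parents[rng.integers(0, 6, 2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])     # one-point crossover
        flip = rng.random(X.shape[1]) < 0.05           # mutation
        child[flip] = 1 - child[flip]
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))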
Abstract:
This tutorial gives a step-by-step explanation of how experimental data are used to construct a biologically realistic multicompartmental model. Special emphasis is given to the many ways in which this process can be imprecise. The tutorial is intended both for experimentalists who want to get into computer modeling and for computer scientists who use abstract neural network models but are curious about biologically realistic modeling. The tutorial does not depend on the use of a specific simulation engine, but rather covers the kind of data needed for constructing a model, how they are used, and potential pitfalls in the process.
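To give a flavour of the kind of data such a model requires, the sketch below integrates a minimal two-compartment passive model (soma plus dendrite) with forward Euler. The capacitance, leak and coupling conductances are placeholder values of the sort one would measure or fit; they are not taken from the tutorial.

# Minimal two-compartment passive membrane model (soma + dendrite),
# forward Euler integration. Parameter values are placeholders.
import numpy as np

dt, t_end = 0.025, 100.0                    # ms
C = 1.0                                     # uF/cm^2 membrane capacitance
g_leak, E_leak = 0.1, -65.0                 # mS/cm^2, mV
g_couple = 0.05                             # axial coupling conductance (mS/cm^2)
v = np.array([-65.0, -65.0])                # [soma, dendrite] in mV

trace = []
for step in range(int(t_end / dt)):
    t = step * dt
    i_inj = np.array([0.5 if 20.0 <= t <= 70.0 else 0.0, 0.0])  # uA/cm^2 into soma
    i_leak = g_leak * (v - E_leak)
    i_axial = g_couple * (v - v[::-1])      # current flowing to the other compartment
    v = v + dt * (i_inj - i_leak - i_axial) / C
    trace.append(v.copy())

print("final voltages (mV):", np.round(trace[-1], 2))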
Abstract:
Neural Networks as Cybernetic Systems is a textbook that combines classical systems theory with artificial neural network technology.
Abstract:
Simulation tools aid in learning neuroscience by providing the student with an interactive environment in which to carry out simulated experiments and test hypotheses. The field of neuroscience is well suited to the use of simulation tools, since nerve cell signaling can be described by mathematical equations and solved by computer. Neural signaling entails the propagation of electrical current along the nerve membrane and transmission to neighboring neurons through synaptic connections. Action potentials and synaptic transmission can be simulated, and the results displayed for visualization and analysis. The neurosimulator SNNAP (Simulator for Neural Networks and Action Potentials) is a simulation environment that provides users with editors for model building, a simulation engine and a visual display editor. This paper presents several modeling examples that illustrate some of the capabilities and features of SNNAP. First, the Hodgkin-Huxley (HH) model is presented and the threshold phenomenon is illustrated. Second, small neural networks are described with HH models using the various synaptic connections available in SNNAP; synaptic connections may be modulated through facilitation or depression. A study of vesicle pool dynamics is then presented using an AMPA receptor model. Finally, a central pattern generator model of the Aplysia feeding circuit is illustrated as an example of a complex network that may be studied with SNNAP. Simulation code is provided for each case study described, and tasks are suggested for further investigation.
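As an illustration of the threshold phenomenon mentioned above, the sketch below integrates the standard Hodgkin-Huxley squid-axon equations with forward Euler in plain NumPy (it does not use SNNAP itself): a brief current pulse below threshold produces only a small depolarisation, while a larger pulse triggers a full action potential. The pulse amplitudes and timing are illustrative.

# Hodgkin-Huxley threshold demonstration with standard squid-axon
# parameters, integrated with forward Euler.
import numpy as np

def hh_peak_voltage(i_amp, dt=0.01, t_end=50.0):
    v, m, h, n = -65.0, 0.05, 0.6, 0.32
    g_na, g_k, g_l = 120.0, 36.0, 0.3
    e_na, e_k, e_l = 50.0, -77.0, -54.387
    peak = v
    for step in range(int(t_end / dt)):
        t = step * dt
        i_ext = i_amp if 5.0 <= t <= 6.0 else 0.0     # 1 ms current pulse
        # Voltage-dependent rate constants (ms^-1).
        am = 0.1 * (v + 40) / (1 - np.exp(-(v + 40) / 10))
        bm = 4.0 * np.exp(-(v + 65) / 18)
        ah = 0.07 * np.exp(-(v + 65) / 20)
        bh = 1.0 / (1 + np.exp(-(v + 35) / 10))
        an = 0.01 * (v + 55) / (1 - np.exp(-(v + 55) / 10))
        bn = 0.125 * np.exp(-(v + 65) / 80)
        # Ionic membrane currents.
        i_ion = (g_na * m**3 * h * (v - e_na) + g_k * n**4 * (v - e_k)
                 + g_l * (v - e_l))
        v += dt * (i_ext - i_ion)          # C_m = 1 uF/cm^2
        m += dt * (am * (1 - m) - bm * m)
        h += dt * (ah * (1 - h) - bh * h)
        n += dt * (an * (1 - n) - bn * n)
        peak = max(peak, v)
    return peak

print("peak V, subthreshold pulse  :", round(hh_peak_voltage(5.0), 1), "mV")
print("peak V, suprathreshold pulse:", round(hh_peak_voltage(20.0), 1), "mV")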
Abstract:
Neural Networks as Cybernetic Systems is a textbook that combines classical systems theory with artificial neural network technology. This third edition is essentially similar to the second one, but has been improved by the correction of errors and by a rearrangement and minor expansion of the sections on recurrent networks. These changes hopefully allow for an easier comprehension of the essential aspects of this important domain, which has received growing attention in recent years.