773 results for Fuzzy min-max neural network
Abstract:
Introduction: This study aimed to determine the frequency of cardiorespiratory arrest (CRA) in the medical practice, a key element in deciding whether a semi-automatic defibrillator (AED) is justified on the premises. Materials and methods: Retrospective analysis of the pre-hospital intervention records of ambulances and mobile emergency and resuscitation units (SMUR, Service Mobile d'Urgence et de Réanimation) of the canton of Vaud (650,000 inhabitants) between 2003 and 2006 that reported a CRA. The following variables were analysed: chronology of the intervention, cardiopulmonary resuscitation (CPR) measures applied, presumed diagnosis, and 48-hour follow-up. Results: 17 CRAs (9 _, 8 _) occurred in the 1655 medical practices of the canton of Vaud over 4 years, out of a total of 1753 out-of-hospital CRAs, i.e. 1% of the latter. All prompted the simultaneous dispatch of an ambulance and a SMUR unit. Mean age was 70 years. The mean interval between the arrest and the arrival of an AED on site exceeded 10 minutes (range 4-25 minutes). In the 13 evaluable cases, CPR was under way when reinforcements arrived, but only 7 attempts were judged effective. The initial rhythm was ventricular fibrillation (VF) in 8 cases, all of which received an external electric shock; one of these shocks was delivered before the arrival of the emergency services, in a practice equipped with an AED. A diagnosis was available in 9 cases: 6 heart diseases, 1 massive pulmonary embolism, 1 anaphylactic shock and 1 drug-overdose suicide attempt. Patient outcomes were 6 deaths on site, 4 deaths at hospital admission, and 7 patients alive at 48 hours. The data allow no follow-up at hospital discharge or later. Conclusions: Although CRA is very rare in the medical practice, it deserves specific anticipation by the physician.
Indeed, the response time of the emergency services means the physician must start resuscitation measures immediately. Moreover, as a healthcare professional, the physician must join the chain of survival by alerting the 144 emergency number early and initiating first-aid measures ("Basic Life Support"). The presence of an AED could be considered depending notably on the distance to professional rescue services equipped with one.
Abstract:
OBJECTIVE. The main goal of this paper is to obtain a classification model based on feed-forward multilayer perceptrons that improves the prediction of postpartum depression during the 32 weeks after childbirth with high sensitivity and specificity, and to develop a tool to be integrated into a decision support system for clinicians. MATERIALS AND METHODS. Multilayer perceptrons were trained on data from 1397 women who had just given birth at seven Spanish general hospitals, including clinical, environmental and genetic variables. A prospective cohort study was conducted just after delivery, at 8 weeks and at 32 weeks after delivery. The models were evaluated with the geometric mean of accuracies using a hold-out strategy. RESULTS. Multilayer perceptrons showed good performance (high sensitivity and specificity) as predictive models for postpartum depression. CONCLUSIONS. The use of these models in a decision support system can be clinically evaluated in future work. Analysis of the models by pruning yields a qualitative interpretation of the influence of each variable, of interest for clinical protocols.
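The hold-out evaluation metric named above, the geometric mean of per-class accuracies (i.e. the square root of sensitivity times specificity), can be sketched as follows; the function name and toy labels are illustrative, not the authors' code:

```python
import math

def geometric_mean_accuracy(y_true, y_pred):
    """Geometric mean of the per-class accuracies (sensitivity and
    specificity) for a binary outcome coded 0/1."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos = sum(1 for t in y_true if t == 1)
    neg = len(y_true) - pos
    sensitivity = tp / pos
    specificity = tn / neg
    return math.sqrt(sensitivity * specificity)

# Toy hold-out predictions: 3/4 positives and 4/5 negatives correct.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1]
gm = geometric_mean_accuracy(y_true, y_pred)
```

Unlike plain accuracy, this metric stays low whenever either class is predicted poorly, which matters for an imbalanced outcome such as depression screening.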
Abstract:
Purpose: Combined antiretroviral therapy has dramatically improved the survival of HIV-infected individuals. Long-term strategies are currently needed to achieve the goal of durable virologic suppression. However, long-term data for specific antiretrovirals (ARV) are limited. In clinical trials, boosted atazanavir (ATV/r) regimens have shown good efficacy and tolerability in ARV-naïve patients for up to 4 years. The REMAIN study aimed to evaluate the long-term outcomes of ATV/r regimens in ARV-naïve patients in a real-life setting. Methods: Non-comparative, observational study conducted in Germany, Portugal and Spain. Historical and longitudinal follow-up data were extracted every six months from the medical records of HIV-infected, treatment-naïve patients who initiated an ATV/r regimen between 2008 and 2010. The primary endpoint was the proportion of patients remaining on ATV treatment over time. Secondary endpoints included virologic response (HIV-1 RNA <50 c/mL and <500 c/mL), reasons for discontinuation and long-term safety. The duration of treatment and time to virologic failure (VF) were analyzed using the Kaplan-Meier method. Data from an interim analysis including patients with at least one year of follow-up are reported here. Results: A total of 411 patients were included in this interim analysis [median (Q1, Q3) follow-up: 23.42 (16.25, 32.24) months]: 77% male; median age 40 years [min, max: 19, 78]; 16% IDUs; 18% CDC stage C; 18% hepatitis C. TDF/FTC was the most common backbone (85%). At baseline, median (Q1, Q3) HIV-RNA and CD4 cell count were 4.91 (4.34, 5.34) log10 c/mL and 256 (139, 353) cells/mm3, respectively. The probability of remaining on treatment was 0.84 (95% CI: 0.80, 0.87) for the first year and 0.72 (95% CI: 0.67, 0.76) for the second. After 2 years of follow-up, 84% (95% CI: 0.79, 0.88) of patients were virologically suppressed (<50 c/mL). No major protease inhibitor mutations were observed at VF.
Overall, 125 patients (30%) discontinued ATV therapy [median (Q1, Q3) time to discontinuation: 11.14 (6.24, 19.35) months]. Adverse events (AEs) were the main reason for discontinuation (n = 47, 11%). Hyperbilirubinaemia was the most common AE leading to discontinuation (14 patients). No unexpected AEs were reported. Conclusions: In a real-life clinical setting, ATV/r regimens showed durable virologic efficacy with good tolerability in an ARV-naïve population. Longer follow-up will provide additional valuable information.
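The probability of remaining on treatment reported above is a Kaplan-Meier estimate. A minimal sketch of the estimator, on a hypothetical four-patient cohort rather than the study's data:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.  times: follow-up in months;
    events: 1 = discontinuation observed, 0 = censored.
    Returns the list of (time, survival probability) steps."""
    surv = 1.0
    curve = []
    for t in sorted(set(times)):
        # d = discontinuations at time t; n = patients still at risk.
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        n = sum(1 for ti in times if ti >= t)
        if d > 0:
            surv *= 1.0 - d / n
            curve.append((t, surv))
    return curve

# Toy cohort: discontinuations at 6 and 12 months, two censored patients.
times  = [6, 8, 12, 24]
events = [1, 0, 1, 0]
km = kaplan_meier(times, events)
```

Censored patients (still on treatment at last contact) contribute to the at-risk denominator up to their censoring time without forcing a step in the curve, which is why the method suits staggered real-life follow-up.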
Abstract:
In the first part of this research, three stages were defined for a program to increase the information extracted from ink evidence and maximise its usefulness to the criminal and civil justice system. These stages are (a) to develop a standard methodology for analysing ink samples by high-performance thin-layer chromatography (HPTLC) in a reproducible way, even when ink samples are analysed at different times, in different locations and by different examiners; (b) to compare ink samples automatically and objectively; and (c) to define and evaluate a theoretical framework for the use of ink evidence in a forensic context. This report focuses on the second of the three stages. Using the calibration and acquisition process described in the previous report, mathematical algorithms are proposed to compare ink samples automatically and objectively. The performance of these algorithms is systematically studied under various chemical and forensic conditions using standard performance tests commonly employed in biometrics. The results show that different algorithms are best suited to different tasks. Finally, this report demonstrates how modern analytical and computer technology can be used in the field of ink examination, and how tools developed and successfully applied in other fields of forensic science can help maximise its impact within the field of questioned documents.
Abstract:
BACKGROUND: Co-morbidity information derived from administrative data needs to be validated to allow its regular use. We assessed the evolution in the accuracy of coding for Charlson and Elixhauser co-morbidities at three time points over a 5-year period, following the introduction of International Classification of Diseases, 10th Revision (ICD-10) coding of hospital discharges. METHODS: Cross-sectional time-trend evaluation of coding accuracy using hospital chart data of 3499 randomly selected patients discharged in 1999, 2001 and 2003 from two teaching hospitals and one non-teaching hospital in Switzerland. We measured sensitivity, positive predictive values and kappa values for agreement between administrative data coded with ICD-10 and chart data as the reference standard for recording 36 co-morbidities. RESULTS: For the 17 Charlson co-morbidities, the sensitivity - median (min-max) - was 36.5% (17.4-64.1) in 1999, 42.5% (22.2-64.6) in 2001 and 42.8% (8.4-75.6) in 2003. For the 29 Elixhauser co-morbidities, the sensitivity was 34.2% (1.9-64.1) in 1999, 38.6% (10.5-66.5) in 2001 and 41.6% (5.1-76.5) in 2003. Between 1999 and 2003, sensitivity estimates increased for 30 co-morbidities and decreased for 6. The increase in sensitivity was statistically significant for six conditions, and the decrease for one. Kappa values increased for 29 co-morbidities and decreased for seven. CONCLUSIONS: The accuracy of administrative data in recording clinical conditions improved slightly between 1999 and 2003. These findings are relevant to all jurisdictions introducing new coding systems, because they demonstrate an improvement in administrative data accuracy that may reflect a coding 'learning curve' with the new system.
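The agreement statistics used here, sensitivity against the chart "reference standard" and Cohen's kappa, can be sketched for a single binary co-morbidity; the data and function name below are hypothetical:

```python
def sensitivity_and_kappa(chart, admin):
    """chart: reference-standard presence of a co-morbidity (0/1);
    admin: the same condition as coded in administrative data (0/1)."""
    n = len(chart)
    tp = sum(1 for c, a in zip(chart, admin) if c == 1 and a == 1)
    tn = sum(1 for c, a in zip(chart, admin) if c == 0 and a == 0)
    sens = tp / sum(chart)
    po = (tp + tn) / n                       # observed agreement
    pc = sum(chart) / n                      # prevalence in chart data
    pa = sum(admin) / n                      # prevalence in admin data
    pe = pc * pa + (1 - pc) * (1 - pa)       # agreement expected by chance
    kappa = (po - pe) / (1 - pe)
    return sens, kappa

# Toy data: 4 true cases, administrative coding catches 2 of them
# and adds one false positive.
chart = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
admin = [1, 1, 0, 0, 0, 0, 0, 0, 0, 1]
sens, kappa = sensitivity_and_kappa(chart, admin)
```

Kappa corrects the raw agreement for chance, which is why it is reported alongside sensitivity: for rare conditions, raw agreement can look high even when few true cases are coded.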
Abstract:
In this paper we study the relevance of multiple kernel learning (MKL) for the automatic selection of time series inputs. Recently, MKL has gained great attention in the machine learning community due to its flexibility in modelling complex patterns and performing feature selection. In general, MKL constructs the kernel as a weighted linear combination of basis kernels, exploiting different sources of information. An efficient algorithm, SimpleMKL, which wraps a Support Vector Regression model to optimise the MKL weights, is used for the analysis. In this setting, MKL performs feature selection by discarding inputs/kernels with low or null weights. The proposed approach is tested on simulated linear and nonlinear time series (AutoRegressive, Hénon and Lorenz series).
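The weighted linear combination of basis kernels that MKL optimises can be sketched as follows; the RBF bases, the weight vector and the toy points are hypothetical stand-ins for what SimpleMKL would actually learn:

```python
import math

def rbf(x, y, gamma):
    """Gaussian RBF basis kernel."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def combined_kernel(x, y, gammas, weights):
    """MKL-style kernel: a convex combination of basis kernels, one per
    candidate input/scale.  Inputs whose kernels receive (near-)zero
    weight are effectively discarded, which is the feature selection."""
    return sum(w * rbf(x, y, g) for g, w in zip(gammas, weights))

gammas  = [0.1, 1.0, 10.0]
weights = [0.7, 0.3, 0.0]   # third basis kernel pruned by the learner
k = combined_kernel([1.0, 2.0], [1.5, 2.5], gammas, weights)
```

In SimpleMKL the weights are optimised jointly with the SVR under a simplex constraint; here they are fixed only to show how a zero weight removes a kernel from the combination.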
Abstract:
The amygdala is part of a neural network that contributes to the regulation of emotional behaviors. Rodents, especially rats, are used extensively as model organisms to decipher the functions of specific amygdala nuclei, in particular in relation to fear and emotional learning. Analysis of the role of the nonhuman primate amygdala in these functions has lagged behind work in the rodent but provides evidence for conservation of basic functions across species. Here we provide quantitative information regarding the morphological characteristics of the main amygdala nuclei in rats and monkeys, including neuron and glial cell numbers, neuronal soma size, and individual nuclei volumes. The volumes of the lateral, basal, and accessory basal nuclei were, respectively, 32, 39, and 39 times larger in monkeys than in rats. In contrast, the central and medial nuclei were only 8 and 4 times larger in monkeys than in rats. The numbers of neurons in the lateral, basal, and accessory basal nuclei were 14, 11, and 16 times greater in monkeys than in rats, whereas the numbers of neurons in the central and medial nuclei were only 2.3 and 1.5 times greater in monkeys than in rats. Neuron density was between 2.4 and 3.7 times lower in monkeys than in rats, whereas glial density was only between 1.1 and 1.7 times lower in monkeys than in rats. We compare our data in rats and monkeys with those previously published in humans and discuss the theoretical and functional implications that derive from our quantitative structural findings.
Abstract:
The sustainability of marine resources and of their ecosystem requires responsible fisheries management. Knowing the spatial distribution of fishing effort, and in particular of fishing operations, is essential for improving fishery monitoring and analysing the vulnerability of species to fishing. In the Peruvian anchoveta fishery, effort and catch information is currently collected through an on-board observer programme, but this covers only a 2% sample of all fishing trips. On the other hand, the vessel monitoring system (VMS) provides the position of every vessel in the fleet roughly once an hour, although it does not indicate when or where the fishing sets occurred. Artificial neural networks (ANN) could be a statistical method capable of inferring that information: trained on the sample for which set positions are known (the 2% mentioned above), they can establish analytical relationships between the sets and certain geometric characteristics of the trajectories observed by VMS and thus, from the latter, identify the positions of fishing operations. Applying the neural network requires a preliminary analysis of the network's sensitivity to variations in its parameters and training data, which lets us develop criteria for defining the network structure and interpreting its results appropriately. The first chapter details this problem as applied specifically to the anchoveta (Engraulis ringens), while the second provides a theoretical review of neural networks. The construction and pre-processing of the database, and the definition of the network structure prior to the sensitivity analysis, are then described.
The results of the analysis are then presented: we obtain an estimate of 100% of the fishing sets, of which approximately 80% are correctly located while 20% carry a location error. Finally, the strengths and weaknesses of the technique, potential alternative methods, and the perspectives opened by this work are discussed.
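The "geometric characteristics of the trajectories" that such a network is trained on can be sketched as speed and turning-angle features derived from consecutive hourly VMS positions; the distance approximation, the toy track and all names below are illustrative assumptions, not the thesis's actual preprocessing:

```python
import math

def track_features(track):
    """Derive simple geometric features from an hourly VMS track.
    track: list of (lon, lat, hours_since_start).  Returns, for each
    interior point, (speed in km/h, absolute turning angle in degrees);
    low speeds and sharp turns are typical signatures of fishing sets."""
    def dist_km(p, q):
        # Equirectangular approximation, adequate for short hourly steps.
        kx = 111.32 * math.cos(math.radians((p[1] + q[1]) / 2))
        return math.hypot((q[0] - p[0]) * kx, (q[1] - p[1]) * 111.32)

    feats = []
    for a, b, c in zip(track, track[1:], track[2:]):
        speed = dist_km(b, c) / (c[2] - b[2])
        # Headings from raw lon/lat deltas (sketch-level simplification).
        h1 = math.atan2(b[1] - a[1], b[0] - a[0])
        h2 = math.atan2(c[1] - b[1], c[0] - b[0])
        turn = abs(math.degrees((h2 - h1 + math.pi) % (2 * math.pi) - math.pi))
        feats.append((speed, turn))
    return feats

# Straight steaming leg followed by a slow, sharp manoeuvre.
track = [(-78.0, -10.0, 0), (-77.8, -10.0, 1),
         (-77.6, -10.0, 2), (-77.59, -9.99, 3)]
feats = track_features(track)
```

Feature vectors of this kind, labelled with the observer-reported set positions, would then form the training sample for the classifier.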
Abstract:
In this paper we present a Bayesian image reconstruction algorithm with entropy prior (FMAPE) that uses a space-variant hyperparameter. The spatial variation of the hyperparameter allows different degrees of resolution in areas of different statistical characteristics, thus avoiding the large residuals resulting from algorithms that use a constant hyperparameter. In the first implementation of the algorithm, we begin by segmenting a Maximum Likelihood Estimator (MLE) reconstruction. The segmentation method is based on a wavelet decomposition and a self-organizing neural network. The result is a predetermined number of extended regions plus a small region for each star or bright object. To assign a different value of the hyperparameter to each extended region and star, we use either feasibility tests or cross-validation methods. Once the set of hyperparameters is obtained, we carry out the final Bayesian reconstruction, leading to a reconstruction with decreased bias and excellent visual characteristics. The method has been applied to data from the non-refurbished Hubble Space Telescope and can also be applied to ground-based images.
Abstract:
Summary: The specific CD8+ T cell immune response against tumors relies on the recognition by the T cell receptor (TCR) on cytotoxic T lymphocytes (CTL) of antigenic peptides bound to the class I major histocompatibility complex (MHC) molecule. Such tumor-associated antigenic peptides are the focus of tumor immunotherapy with peptide vaccines. The strategy for obtaining an improved immune response often involves the design of modified tumor-associated antigenic peptides. Such modifications aim at creating higher-affinity and/or degradation-resistant peptides and require precise structures of the peptide-MHC class I complex. In addition, the modified peptide must be cross-recognized by CTLs specific for the parental peptide, i.e. it must preserve the structure of the epitope. Detailed structural information on the modified peptide in complex with MHC is necessary for such predictions. In this thesis, the main focus is the development of theoretical in silico methods for the prediction of both structure and cross-reactivity of peptide-MHC class I complexes. Applications of these methods in the context of immunotherapy are also presented. First, a theoretical method for structure prediction of peptide-MHC class I complexes is developed and validated. The approach is based on a molecular dynamics protocol to sample the conformational space of the peptide in its MHC environment. The sampled conformers are evaluated using conformational free energy calculations. The method, which is evaluated for its ability to reproduce 41 X-ray crystallographic structures of different peptide-MHC class I complexes, shows an overall prediction success of 83%. Importantly, in the clinically highly relevant subset of peptide-HLA-A*0201 complexes, the prediction success is 100%. Based on these structure predictions, a theoretical approach for the prediction of cross-reactivity is developed and validated.
This method involves the generation of quantitative structure-activity relationships using three-dimensional molecular descriptors and a genetic neural network. The generated relationships are highly predictive, as shown by high cross-validated correlation coefficients (0.78-0.79). Together, the theoretical methods developed here open the door to efficient rational design of improved peptides to be used in immunotherapy.
Abstract:
The research considers the problem of spatial data classification using machine learning algorithms: probabilistic neural networks (PNN) and support vector machines (SVM). As a benchmark model, the simple k-nearest-neighbour algorithm is considered. PNN is a neural network reformulation of well-known nonparametric principles of probability density modelling, using a kernel density estimator together with Bayesian optimal or maximum a posteriori decision rules. PNN is well suited to problems where not only predictions but also quantification of accuracy and integration of prior information are necessary. An important property of PNNs is that they can easily be used in decision support systems dealing with automatic classification. The support vector machine is an implementation of the principles of statistical learning theory for classification tasks. Recently, SVMs have been successfully applied to a variety of environmental topics: classification of soil types and hydro-geological units, optimisation of monitoring networks, and susceptibility mapping of natural hazards. In the present paper, both simulated and real-data case studies (low- and high-dimensional) are considered. The main attention is paid to the detection and learning of spatial patterns by the algorithms applied.
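The PNN decision rule described here, a Parzen kernel density per class followed by a Bayes-optimal choice, can be sketched minimally; the Gaussian kernel width, the toy points and the assumption of equal class priors are illustrative:

```python
import math

def pnn_classify(x, train, sigma=0.5):
    """Probabilistic Neural Network sketch: Gaussian Parzen-window
    density estimate per class, then a Bayes decision picking the
    largest class-conditional density (equal priors assumed)."""
    dens, counts = {}, {}
    for xi, label in train:
        k = math.exp(-sum((a - b) ** 2 for a, b in zip(x, xi))
                     / (2 * sigma ** 2))
        dens[label] = dens.get(label, 0.0) + k
        counts[label] = counts.get(label, 0) + 1
    # Normalise by class size so each sum behaves as a density estimate.
    return max(dens, key=lambda c: dens[c] / counts[c])

# Toy 2-D spatial data: two classes in separate regions.
train = [([0.0, 0.0], "A"), ([0.2, 0.1], "A"),
         ([1.0, 1.0], "B"), ([1.1, 0.9], "B")]
label = pnn_classify([0.1, 0.0], train)
```

Because the per-class densities are explicit, the same computation also yields the posterior-style confidence the abstract alludes to, not just the predicted label.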
Abstract:
In the present study, different environmental variables were evaluated for digital soil mapping in a region in the north of the State of Minas Gerais, Brazil, using artificial neural networks (ANN). The terrain attributes slope and compound topographic index (CTI), derived from a digital elevation model, three bands of the Quickbird sensor, and a lithology map were combined, and the importance of each variable for discriminating the mapping units was evaluated. The neural network simulator used was the Java Neural Network Simulator, with the backpropagation learning algorithm. For each set tested, an ANN was selected to predict the mapping units; the maps generated from these sets were compared with a soil map produced by the conventional method, to determine the agreement between the classifications. This comparison showed that the map produced using all environmental variables (slope, CTI, Quickbird bands 1, 2 and 3, and lithology) performed better (67.4% agreement) than the maps produced by the other sets of variables. Of the variables used, slope contributed the most: when it was removed from the analysis, agreement was lowest (33.7%). The results demonstrate that this approach can help overcome some of the problems of soil mapping in Brazil, especially at scales larger than 1:25,000, making it faster and cheaper, particularly if high-spatial-resolution remote sensing data become available at lower cost and terrain attributes can be readily obtained in geographic information systems (GIS).
Abstract:
The proportion of the population living in or around cities is higher than ever. Urban sprawl and car dependence have taken over from the pedestrian-friendly compact city. Environmental problems like air pollution, land waste or noise, and health problems are the result of this still continuing process. Urban planners have to find solutions to these complex problems while at the same time ensuring the economic performance of the city and its surroundings. Meanwhile, an increasing quantity of socio-economic and environmental data is being acquired. To better understand the processes and phenomena taking place in the complex urban environment, these data should be analysed. Numerous methods for modelling and simulating such a system exist, are still under development, and can be exploited by urban geographers to improve our understanding of the urban metabolism. Modern and innovative visualisation techniques help in communicating the results of such models and simulations. This thesis covers several methods for the analysis, modelling, simulation and visualisation of problems related to urban geography. The analysis of high-dimensional socio-economic data using artificial neural network techniques, especially self-organising maps, is shown using two examples at different scales. The problem of spatio-temporal modelling and data representation is treated and some possible solutions are shown. The simulation of urban dynamics, and more specifically of the traffic due to commuting to work, is illustrated using multi-agent micro-simulation techniques. A section on visualisation methods presents cartograms for transforming geographic space into a feature space, and the distance circle map, a centre-based map representation particularly useful for urban agglomerations. Some issues concerning the importance of scale in urban analysis and the clustering of urban phenomena are discussed.
A new approach to defining urban areas at different scales is developed, and the link with percolation theory is established. Fractal statistics, especially the lacunarity measure, and scale laws are used for characterising urban clusters. In a final section, population evolution is modelled using a model close to the well-established gravity model. The work covers quite a wide range of methods useful in urban geography. These methods should be developed further and, at the same time, find their way into the daily work and decision processes of urban planners.
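A self-organising map of the kind used above for high-dimensional socio-economic data can be sketched from scratch; the grid size, decay schedule and toy 2-D data are illustrative choices, not the thesis's configuration:

```python
import math
import random

def train_som(data, grid=(3, 3), epochs=200, lr0=0.5, radius0=1.5, seed=0):
    """Minimal self-organising map on a small rectangular grid.
    Each sample pulls the best-matching unit (BMU) and, with a
    Gaussian neighbourhood that shrinks over time, its grid
    neighbours, toward itself."""
    rng = random.Random(seed)
    dim = len(data[0])
    w = {(i, j): [rng.random() for _ in range(dim)]
         for i in range(grid[0]) for j in range(grid[1])}
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)
        radius = radius0 * (1 - e / epochs) + 0.01
        for x in data:
            b = bmu(w, x)
            for u in w:
                d2 = (u[0] - b[0]) ** 2 + (u[1] - b[1]) ** 2
                h = math.exp(-d2 / (2 * radius ** 2))
                w[u] = [wi + lr * h * (xi - wi) for wi, xi in zip(w[u], x)]
    return w

def bmu(w, x):
    """Best-matching unit: grid cell with the nearest weight vector."""
    return min(w, key=lambda u: sum((a - b) ** 2 for a, b in zip(x, w[u])))

# Two well-separated clusters should map to different grid units.
data = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
        [1.0, 1.0], [0.9, 1.0], [1.0, 0.9]]
w = train_som(data)
```

The appeal for socio-economic analysis is that the trained 2-D grid preserves neighbourhood relations of the high-dimensional input space, so similar communes or districts land on nearby units.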
Abstract:
A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically infeasible, even in simpler systems like dissociated neuronal cultures. We introduce an improved algorithmic approach based on Transfer Entropy to reconstruct structural connectivity from network activity monitored through calcium imaging. In this study we focus on the inference of excitatory synaptic links. Based on information theory, our method requires no prior assumptions on the statistics of neuronal firing and neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the functional network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (bursting or non-bursting). Thus, by conditioning with respect to the global mean activity, we improve the performance of our method. This allows us to focus the analysis on specific dynamical regimes of the network in which the inferred functional connectivity is shaped by monosynaptic excitatory connections, rather than by collective synchrony. Our method can discriminate between actual causal influences between neurons and spurious non-causal correlations due to light-scattering artifacts, which inherently affect the quality of fluorescence imaging. Compared to other reconstruction strategies such as cross-correlation or Granger Causality methods, our method based on improved Transfer Entropy is remarkably more accurate. In particular, it provides a good estimation of the excitatory network clustering coefficient, allowing for discrimination between weakly and strongly clustered topologies.
Finally, we demonstrate the applicability of our method to analyses of real recordings of in vitro disinhibited cortical cultures where we suggest that excitatory connections are characterized by an elevated level of clustering compared to a random graph (although not extreme) and can be markedly non-local.
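The core quantity here, transfer entropy, can be sketched for binary time series with a history length of 1; the binning, the short history and the toy delayed-copy series are simplifying assumptions relative to the calcium-fluorescence setting:

```python
import math
from collections import Counter

def transfer_entropy(x, y):
    """Transfer entropy TE(X -> Y) for 0/1 series with history length 1:
    the extra information x_{t-1} gives about y_t beyond y_{t-1},
    estimated with plug-in counts.  Positive values suggest a directed
    influence of X on Y."""
    triples = list(zip(y[1:], y[:-1], x[:-1]))      # (y_t, y_{t-1}, x_{t-1})
    n = len(triples)
    c_xyz = Counter(triples)
    c_yz = Counter((yp, xp) for _, yp, xp in triples)
    c_yy = Counter((yt, yp) for yt, yp, _ in triples)
    c_y = Counter(yp for _, yp, _ in triples)
    te = 0.0
    for (yt, yp, xp), c in c_xyz.items():
        # p(y_t | y_{t-1}, x_{t-1}) / p(y_t | y_{t-1}), in counts.
        te += (c / n) * math.log2(c * c_y[yp] / (c_yz[(yp, xp)] * c_yy[(yt, yp)]))
    return te

# y copies x with one step of delay, so TE(x -> y) should be large
# while TE(y -> x) should be clearly smaller.
x = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0]
y = [0] + x[:-1]
te_xy = transfer_entropy(x, y)
te_yx = transfer_entropy(y, x)
```

The asymmetry between the two directions is what distinguishes transfer entropy from symmetric measures such as cross-correlation, and is the property the reconstruction method exploits.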
Abstract:
The paper deals with the development and application of a generic methodology for the automatic processing (mapping and classification) of environmental data. The General Regression Neural Network (GRNN) is considered in detail and is proposed as an efficient tool to solve the problem of spatial data mapping (regression). The Probabilistic Neural Network (PNN) is considered as an automatic tool for spatial classification. The automatic tuning of isotropic and anisotropic GRNN/PNN models using a cross-validation procedure is presented. Results are compared with those of the k-Nearest-Neighbours (k-NN) interpolation algorithm using an independent validation data set. The real case studies are based on decision-oriented mapping and classification of radioactively contaminated territories.
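The GRNN used here for spatial mapping is essentially Nadaraya-Watson kernel regression; a minimal sketch, with a hypothetical bandwidth and toy coordinates (the isotropic case; the anisotropic variant would use a different bandwidth per axis):

```python
import math

def grnn_predict(x, train_x, train_y, sigma=1.0):
    """General Regression Neural Network sketch: a Gaussian-kernel
    weighted average of the training targets.  The single free
    parameter, the bandwidth sigma, is the quantity the paper tunes
    automatically by cross-validation."""
    ws = [math.exp(-sum((a - b) ** 2 for a, b in zip(x, xi))
                   / (2 * sigma ** 2)) for xi in train_x]
    return sum(w * yi for w, yi in zip(ws, train_y)) / sum(ws)

# Toy spatial data set: the measured value grows along the x-axis.
train_x = [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]]
train_y = [0.0, 1.0, 2.0, 3.0]
y_hat = grnn_predict([1.5, 0.0], train_x, train_y, sigma=0.5)
```

Small sigma reproduces the data closely (risking noise); large sigma smooths toward the global mean, which is why cross-validated tuning of sigma is the central model-selection step.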