922 results for neural classification
Abstract:
The paper presents a novel method for monitoring-network optimisation based on a recent machine learning technique, the support vector machine (SVM). It is problem-oriented in the sense that it directly answers whether a proposed spatial location is important for the classification model. The method can be used to increase the accuracy of classification models by taking a small number of additional measurements. Traditionally, monitoring-network optimisation is performed through analysis of kriging variances. The method is compared with this traditional approach on a real case study with climate data.
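The ranking idea behind such problem-oriented optimisation can be illustrated with a toy sketch: candidate locations closest to a classifier's decision boundary are the most ambiguous, so measuring there is expected to help the model most. The linear decision function, its weights, and the coordinates below are hypothetical stand-ins, not the paper's actual SVM or climate data.

```python
import numpy as np

def margin_distance(X, w, b):
    """Unsigned distance of each point to the hyperplane w.x + b = 0."""
    return np.abs(X @ w + b) / np.linalg.norm(w)

rng = np.random.default_rng(0)
candidates = rng.uniform(0, 10, size=(50, 2))   # candidate (x, y) locations
w, b = np.array([1.0, -1.0]), 0.0               # hypothetical trained linear SVM

d = margin_distance(candidates, w, b)
priority = np.argsort(d)          # smallest margin first = most informative
print(candidates[priority[:5]])   # 5 best places for additional measurements
```

Locations far from the boundary are already classified confidently, so extra measurements there add little; this mirrors the abstract's claim that a few well-chosen additional measurements can raise accuracy.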
Abstract:
Radioactive soil-contamination mapping and risk assessment is a vital issue for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction realization accompanied (in some cases) by estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model for more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty. Unlike regression models, they provide multiple realizations of a particular spatial pattern that allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping, based on machine learning and stochastic simulations in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models for prediction and classification problems. This fallout is a unique case study that provides the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.
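The core advantage claimed for stochastic simulations over regression models, multiple equally probable realizations enabling probabilistic risk mapping, can be sketched in a few lines. The realizations here are synthetic lognormal placeholders, not Chernobyl fallout data, and the threshold is illustrative.

```python
import numpy as np

def exceedance_probability(realizations, threshold):
    """realizations: (n_sim, ny, nx) array -> P(value > threshold) per cell."""
    return (realizations > threshold).mean(axis=0)

rng = np.random.default_rng(1)
sims = rng.lognormal(mean=0.0, sigma=1.0, size=(200, 4, 4))  # 200 synthetic maps
risk_map = exceedance_probability(sims, threshold=2.0)
print(risk_map.round(2))   # per-cell probability of exceeding the threshold
```

A single-value regression prediction cannot yield such a probability map; the ensemble of realizations is what turns prediction into risk quantification.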
Abstract:
This document, Classifications and Pay Plans, is produced by the State of Iowa Executive Branch, Department of Administrative Services. It is an informational document describing the pay plan codes and classification codes and how to use them.
Abstract:
An exhaustive classification of the matrix effects occurring when sample preparation is performed prior to liquid chromatography coupled to mass spectrometry (LC-MS) analyses was proposed. A total of eight different situations were identified, allowing the recognition of the matrix effect typology via the calculation of four recovery values. A set of 198 compounds was used to evaluate matrix effects after solid-phase extraction (SPE) from plasma or urine samples prior to LC-ESI-MS analysis. Matrix effect identification was achieved for all compounds and classified through an organization chart. Only 17% of the tested compounds did not present significant matrix effects.
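The paper's eight-case typology and its four recovery values are not spelled out in the abstract, so the sketch below falls back on the widely used Matuszewski-style quantities computed from mean peak areas, with an illustrative ±15% acceptance window (an assumption, not the paper's criterion).

```python
def matrix_effect(area_neat, area_post_spike, area_pre_spike):
    """Return (ME%, RE%, PE%) from mean peak areas of:
    neat standard, post-extraction spiked matrix, pre-extraction spiked matrix."""
    me = 100.0 * area_post_spike / area_neat        # matrix effect
    re = 100.0 * area_pre_spike / area_post_spike   # extraction recovery
    pe = 100.0 * area_pre_spike / area_neat         # overall process efficiency
    return me, re, pe

def classify_me(me_percent, tolerance=15.0):
    # +/-15% window is illustrative, not the paper's decision rule
    if me_percent < 100.0 - tolerance:
        return "ion suppression"
    if me_percent > 100.0 + tolerance:
        return "ion enhancement"
    return "no significant matrix effect"

me, re, pe = matrix_effect(1000.0, 700.0, 630.0)
print(me, re, pe, classify_me(me))  # 70.0 90.0 63.0 ion suppression
```

The paper's organization chart presumably refines this coarse three-way split into its eight situations using all four recovery values.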
Abstract:
Early detection of neural-tube defects is possible by determining alpha-fetoprotein (AFP) in maternal serum. 16,685 pregnant women were observed. Three methods for determining the "normal" range are compared. The first, already used in similar studies, uses a constant multiple of the median. The other two use robust estimates of location and scale. Their comparison demonstrates the value of the robust methods in reducing interlaboratory variability.
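The two families of estimators being compared can be sketched in a few lines: a fixed multiple of the median (the classical multiple-of-median cutoff; the factor 2.5 is illustrative) versus a robust range built from the median and the MAD, scaled by 1.4826 to be consistent with a normal standard deviation. The toy AFP values are invented, not the study's data.

```python
import statistics as st

def mom_cutoff(values, multiple=2.5):
    """Classical cutoff: a constant multiple of the median."""
    return multiple * st.median(values)

def robust_range(values, k=3.0):
    """Robust normal range: median +/- k * (1.4826 * MAD)."""
    med = st.median(values)
    mad = st.median([abs(v - med) for v in values])
    sigma = 1.4826 * mad   # consistent estimate of the SD under normality
    return med - k * sigma, med + k * sigma

afp = [20, 22, 25, 25, 27, 30, 33, 35, 40, 210]   # one gross outlier
print(mom_cutoff(afp))      # 2.5 * median
print(robust_range(afp))    # barely affected by the outlier
```

Both estimators ignore the outlier almost entirely, which is exactly the property that helps tame interlaboratory variability; a mean-and-SD range would be dragged far upward by the value 210.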
Abstract:
The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that must be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As radon is a multivariate process, it was important first to define the influence of each factor. In particular, it was important to define the influence of geology, as it is closely associated with indoor radon. This association was indeed observed for the Swiss data, but not proved to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both the univariate and multivariate levels, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering, and moving-window methods. The use of the Quantité Morisita Index (QMI) as a procedure to evaluate data clustering as a function of the radon level was proposed. The existing declustering methods were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase comes along with the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, the data partition was optimized in order to cope with the stationarity conditions of geostatistical models. Common methods of spatial modeling, such as K Nearest Neighbors (KNN), variography, and General Regression Neural Networks (GRNN), were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset.
A bottom-up method-complexity approach was adopted, and the results were analyzed together in order to find common definitions of the continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests of data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformation and simulations thus allowed the use of multigaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for modeling extreme values through classification. Simulation scenarios were proposed, including an alternative proposal for the reproduction of the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for hardening the data classification. Among the classification methods, probabilistic neural networks (PNN) were shown to be better adapted for modeling high-threshold categorization and for automation. Support vector machines (SVM), on the contrary, performed well under balanced category conditions. In general, it was concluded that no particular prediction or estimation method is better under all conditions of scale and neighborhood definition. Simulations should be the basis, while other methods can provide complementary information to achieve efficient indoor radon decision making.
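As a flavour of the exploratory tools named above, here is a minimal KNN spatial estimator with leave-one-out cross-validation; the thesis' CVMF filter is built on cross-validated errors of this kind, though the exact filter is not reproduced. The synthetic smooth field stands in for the actual radon data, and all parameters are illustrative.

```python
import numpy as np

def knn_predict(coords, values, query, k=3):
    """Predict at `query` as the mean of the k nearest measured values."""
    d = np.linalg.norm(coords - query, axis=1)
    nearest = np.argsort(d)[:k]
    return values[nearest].mean()

def loo_cv_errors(coords, values, k=3):
    """Leave-one-out absolute errors: the basis of a cross-validation filter."""
    errs = []
    for i in range(len(values)):
        mask = np.arange(len(values)) != i
        pred = knn_predict(coords[mask], values[mask], coords[i], k)
        errs.append(abs(pred - values[i]))
    return np.array(errs)

rng = np.random.default_rng(2)
coords = rng.uniform(0, 1, size=(30, 2))
values = coords[:, 0] * 10 + rng.normal(0, 0.1, 30)   # smooth field + noise
print(loo_cv_errors(coords, values, k=3).mean())
```

Points with unusually large leave-one-out errors are candidates for filtering as local noise, which matches the stated purpose of the CVMF.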
Abstract:
Newborn neurons are generated in the adult hippocampus from a pool of self-renewing stem cells located in the subgranular zone (SGZ) of the dentate gyrus. Their activation, proliferation, and maturation depend on a host of environmental and cellular factors but, until recently, the contribution of local neuronal circuitry to this process was relatively unknown. In their recent publication, Song and colleagues have uncovered a novel circuit-based mechanism by which release of the neurotransmitter, γ-aminobutyric acid (GABA), from parvalbumin-expressing (PV) interneurons, can hold radial glia-like (RGL) stem cells of the adult SGZ in a quiescent state. This tonic GABAergic signal, dependent upon the activation of γ(2) subunit-containing GABA(A) receptors of RGL stem cells, can thus prevent their proliferation and subsequent maturation or return them to quiescence if previously activated. PV interneurons are thus capable of suppressing neurogenesis during periods of high network activity and facilitating neurogenesis when network activity is low.
Abstract:
Background: The coagulation factor thrombin mediates ischemic neuronal death and, at a low concentration, induces tolerance to ischemia. We investigated its mode of activation in ischemic neural tissue using an in vitro approach to distinguish the role of circulating coagulation factors from endogenous cerebral mechanisms. We also studied the signalling pathway downstream of thrombin in ischemia and after thrombin preconditioning.
Methods: Rat organotypic hippocampal slice cultures were subjected to 30 minutes of oxygen (5%) and glucose (1 mmol/L) deprivation (OGD).
Results: Selective factor Xa (FXa) inhibition by fondaparinux during and after OGD significantly reduced neuronal death in the CA1 after 48 hours. Thrombin activity was increased in the medium 24 hours after OGD, and this increase was prevented by fondaparinux, suggesting that FXa catalyzes the conversion of prothrombin to thrombin in neural tissue after ischemia in vitro. Treatment with SCH79797, a selective antagonist of the thrombin receptor protease-activated receptor-1 (PAR-1), significantly decreased neuronal cell death, indicating that thrombin signals ischemic damage via PAR-1. The JNK pathway plays an important role in cerebral ischemia, and we observed activation of the JNK substrate c-Jun in our model. Both the FXa inhibitor fondaparinux and the PAR-1 antagonist SCH79797 decreased the level of phospho-c-Jun Ser73. After thrombin preconditioning, c-Jun was activated by phosphorylation in the nuclei of neurons of the CA1. Treatment with a synthetic thrombin receptor agonist resulted in the same c-Jun activation profile and in protection against subsequent OGD, indicating that thrombin also signals via PAR-1 and c-Jun in cell protection.
Conclusion: These results indicate that FXa activates thrombin in cerebral ischemia, leading via PAR-1 to activation of the JNK pathway and resulting in neuronal death. Thrombin-induced tolerance also involves PAR-1 and JNK, revealing common features in cell death and survival signalling.
Abstract:
Embryonic stem cells (ESCs) offer attractive prospects as a potential source of neurons for cell replacement therapy in human neurodegenerative diseases. In addition, ESC neural differentiation enables in vitro tissue engineering for fundamental research and drug discovery aimed at the nervous system. We have established stable, long-term three-dimensional (3D) culture conditions which can be used to model long-latency and complex neurodegenerative diseases. Mouse ESC-derived neural progenitor cells, generated by MS5 stromal cell induction, result in strictly neural 3D cultures about 120 μm thick, whose cells expressed mature neuronal, astrocyte, and myelin markers. Neurons were of the glutamatergic and GABAergic lineages. This nervous tissue was spatially organized in specific layers resembling brain sub-ependymal (SE) nervous tissue, and was maintained in vitro for at least 3.5 months with great stability. Electron microscopy showed the presence of mature synapses and myelinated axons, suggesting functional maturation. Electrophysiological recordings revealed biological signals involving action potential propagation along neuronal fibres and synaptic-like release of neurotransmitters. The rapid development and stabilization of this 3D culture model result in an abundant and long-lasting production that is compatible with multiple productive investigations for neurodegenerative disease modeling, drug and toxicology screening, and stress and aging research.
Abstract:
Focused first on formalism and methods, this thesis is built on three formalized concepts: a contingency table, a matrix of Euclidean dissimilarities, and an exchange matrix. From these, several data analysis and machine learning methods are expressed and developed: correspondence analysis (CA), viewed as a special case of multidimensional scaling; supervised and unsupervised classification, combined with Schoenberg transformations; and indices of autocorrelation and cross-autocorrelation, adapted to multivariate analyses and allowing various families of neighbourhoods to be considered. These methods then lead to a practice of exploratory analysis of various textual and musical data. For the textual data, we are interested in the automatic classification of uttered clauses into discourse types, based on the morphosyntactic categories (CMS) they contain. Although the statistical link between the CMS and the discourse types is confirmed, the classification results obtained with the K-means method combined with a Schoenberg transformation, as well as with a fuzzy variant of the K-means algorithm, are harder to interpret. We also address the supervised multi-label classification of dialogue turns into dialogue acts, based again on the CMS they contain, but also on the lemmas and the meaning of the verbs. The results obtained through discriminant analysis combined with a Schoenberg transformation are promising. Finally, we examine textual autocorrelation, from the angle of similarities between various positions of a text, viewed as a sequence of units. In particular, the phenomenon of word-length alternation in a text is observed for neighbourhoods of varying span.
We also study similarities as a function of the occurrence, or not, of certain parts of speech, as well as the semantic similarities of the various positions of a text. Regarding the musical data, we propose a representation of a musical score as a contingency table. We begin by using CA and the autocorrelation index to uncover the structures present in each score. We then apply the same kind of approach to the different voices of a score, using a fuzzy variant of multiple correspondence analysis and the cross-autocorrelation index. Whether for the complete score or the different voices it contains, repeated structures are indeed detected, provided they are not transposed. Finally, we propose to automatically classify twenty scores by four different composers, each represented by a contingency table, by means of an index measuring the similarity of two configurations. The results obtained make it possible to successfully group most of the works by composer.
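Correspondence analysis of a contingency table, the central tool above, can be sketched as an SVD of the matrix of standardized residuals, which is consistent with viewing CA as a special case of multidimensional scaling. The toy table below (say, counts of three note categories in three voices) is purely illustrative, not taken from the thesis' corpora.

```python
import numpy as np

def correspondence_analysis(N):
    """CA of a contingency table N via SVD of its standardized residuals."""
    P = N / N.sum()                       # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)   # row and column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    row_coords = (U * sv) / np.sqrt(r)[:, None]   # principal row coordinates
    inertia = (sv ** 2).sum()                     # total inertia = chi2 / n
    return row_coords, inertia

N = np.array([[30.0, 10.0, 5.0],
              [10.0, 25.0, 10.0],
              [5.0, 10.0, 30.0]])
coords, inertia = correspondence_analysis(N)
print(coords[:, :2])   # rows plotted on the first two factorial axes
print(inertia)
```

The total inertia equals the table's chi-squared statistic divided by the grand total, which is why CA simultaneously visualizes and quantifies the row-column association.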
Abstract:
In routine forensic pathology, fatal cases of contrast agent exposure are occasionally encountered. In such situations, beyond the difficulties inherent in establishing the cause of death due to nonspecific or absent autopsy and histology findings as well as limited laboratory investigations, pathologists may face other problems in formulating exhaustive, complete reports and conclusions that are scientifically accurate. Indeed, the terminology concerning adverse drug reactions and allergy nomenclature is confusing. Some terms still used in forensic and radiological reports are outdated and should be avoided. Additionally, not all forensic pathologists have mastered contrast material classification and the pathogenesis of contrast agent reactions. We present a review of the literature covering allergic reactions to contrast material exposure in order to update the terminology used, explain the pathophysiology, and list currently available laboratory investigations for diagnosis in the forensic setting.
Abstract:
Rhythmic activity plays a central role in neural computations and brain functions ranging from homeostasis to attention, as well as in neurological and neuropsychiatric disorders. Despite this pervasiveness, little is known about the mechanisms whereby the frequency and power of oscillatory activity are modulated, and how they reflect the inputs received by neurons. Numerous studies have reported input-dependent fluctuations in peak frequency and power (as well as couplings across these features). However, it remains unresolved what mediates these spectral shifts among neural populations. Extending previous findings regarding stochastic nonlinear systems and experimental observations, we provide analytical insights regarding the oscillatory responses of neural populations to stimulation of either endogenous or exogenous origin. Using a deceptively simple yet sparse and randomly connected network of neurons, we show how spiking inputs can reliably modulate the peak frequency and power expressed by synchronous neural populations without any changes in circuitry. Our results reveal that a generic, nonlinear, input-induced mechanism can robustly mediate these spectral fluctuations, and thus provide a framework in which inputs to the neurons bidirectionally regulate both the frequency and power expressed by synchronous populations. Theoretical and computational analysis showed that the ensuing spectral fluctuations reflect the underlying dynamics of the input stimuli driving the neurons. Our results provide insights into a generic mechanism supporting the spectral transitions observed across cortical networks and spanning multiple frequency bands.
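A hedged toy model of the claim above, input strength shifting both peak frequency and power without any circuit change, can be built from a noise-driven damped oscillator whose effective frequency rises with drive. The input-to-frequency coupling below is an assumption for illustration; the paper's spiking network is not reproduced. Peak frequency is read off a simple periodogram.

```python
import numpy as np

def simulate(omega0, drive, n=4096, dt=1e-3, seed=0):
    """Semi-implicit Euler integration of a damped oscillator driven by noise.
    The sqrt(1 + drive) frequency coupling is an illustrative assumption."""
    rng = np.random.default_rng(seed)
    x, v, gamma = 0.0, 0.0, 5.0
    omega = omega0 * np.sqrt(1.0 + drive)   # assumed input -> frequency coupling
    out = np.empty(n)
    for i in range(n):
        a = -gamma * v - omega ** 2 * x + drive * rng.normal() / dt ** 0.5
        v += a * dt          # update velocity first (symplectic, stable)
        x += v * dt
        out[i] = x
    return out

def peak_frequency(signal, dt=1e-3):
    """Frequency of the largest periodogram bin, excluding DC."""
    f = np.fft.rfftfreq(len(signal), dt)
    p = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    return f[1:][np.argmax(p[1:])]

low = peak_frequency(simulate(2 * np.pi * 10, drive=0.5))
high = peak_frequency(simulate(2 * np.pi * 10, drive=3.0))
print(low, high)   # the peak shifts upward with stronger drive
```

Stronger drive here raises both the resonant frequency and the injected noise power, mimicking the co-modulation of frequency and power reported in the abstract while the "circuit" (the oscillator equation) stays fixed.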