806 results for "Generalized regression neural network"
Abstract:
Background and objective: Cefepime was one of the most used broad-spectrum antibiotics in Swiss public acute care hospitals. The drug was withdrawn from the market in January 2007 and then replaced by a generic in October 2007. The goal of the study was to evaluate changes in the use of broad-spectrum antibiotics after the withdrawal of the original cefepime product. Design: A generalized regression-based interrupted time series model incorporating autocorrelated errors assessed how much the withdrawal changed the monthly use of other broad-spectrum antibiotics (ceftazidime, imipenem/cilastatin, meropenem, piperacillin/tazobactam) in defined daily doses (DDD)/100 bed-days from January 2004 to December 2008 [1, 2]. Setting: 10 Swiss public acute care hospitals (7 with <200 beds, 3 with 200-500 beds). Nine hospitals (group A) had a shortage of cefepime, and one hospital had no shortage thanks to importation of cefepime from abroad. Main outcome measures: Underlying trend of use before the withdrawal, and changes in the level and in the trend of use after the withdrawal. Results: Before the withdrawal, the average estimated underlying trend (coefficient b1) for cefepime was -0.047 DDD/100 bed-days per month (95% CI -0.086, -0.009) and was significant in three hospitals (group A, P < 0.01). Cefepime withdrawal was associated with a significant increase in the level of use (b2) of piperacillin/tazobactam and imipenem/cilastatin in, respectively, one and five hospitals from group A. After the withdrawal, the average estimated trend (b3) was greatest for piperacillin/tazobactam (+0.043 DDD/100 bed-days per month; 95% CI -0.001, 0.089) and was significant in four hospitals from group A (P < 0.05). The hospital without a drug shortage showed no significant change in the trend or the level of use. The hypothesis of seasonality was rejected in all hospitals.
Conclusions: The decreased use of cefepime already observed before its withdrawal from the market could be explained by pre-existing difficulties in drug supply. The withdrawal of cefepime resulted in a change in the level of use of piperacillin/tazobactam and imipenem/cilastatin. Moreover, an increase in the trend was found for piperacillin/tazobactam thereafter. As these changes generally come at the price of lower bacterial susceptibility, manufacturers' commitment to avoiding shortages in the supply of their products would be important. As a next step, we will measure the impact of these changes on the costs and susceptibility rates associated with these antibiotics.
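The segmented-regression form of such an interrupted time series model can be sketched as follows. This is a minimal illustration with ordinary least squares and made-up monthly data; the published model additionally accounts for autocorrelated errors, which plain OLS ignores, and all variable names here are illustrative.

```python
import numpy as np

def fit_its(y, t0):
    """Fit y(t) = b0 + b1*t + b2*post + b3*(t - t0)*post by OLS.

    b1: pre-intervention trend, b2: level change at the interruption t0,
    b3: change in trend after t0.
    """
    t = np.arange(len(y), dtype=float)
    post = (t >= t0).astype(float)
    X = np.column_stack([np.ones_like(t), t, post, (t - t0) * post])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # [b0, b1, b2, b3]

# Synthetic monthly use: declining trend, then a jump and a new trend after month 36
rng = np.random.default_rng(0)
t = np.arange(60, dtype=float)
post = (t >= 36).astype(float)
y = 10.0 - 0.05 * t + 1.5 * post + 0.04 * (t - 36) * post + rng.normal(0, 0.05, 60)
b0, b1, b2, b3 = fit_its(y, 36)
```

The fitted b2 and b3 correspond to the "change in level" and "change in trend" reported per hospital in the abstract.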
Abstract:
The research considers the problem of spatial data classification using machine learning algorithms: probabilistic neural networks (PNN) and support vector machines (SVM). As a benchmark model, the simple k-nearest neighbor algorithm is considered. PNN is a neural network reformulation of the well-known nonparametric principles of probability density modeling using kernel density estimators and Bayesian optimal or maximum a posteriori decision rules. PNN is well suited to problems where not only predictions but also quantification of accuracy and integration of prior information are necessary. An important property of PNN is that it can be easily used in decision support systems dealing with problems of automatic classification. The support vector machine is an implementation of the principles of statistical learning theory for classification tasks. Recently, SVMs have been successfully applied to various environmental topics: classification of soil types and hydro-geological units, optimization of monitoring networks, and susceptibility mapping of natural hazards. In the present paper, both simulated and real data case studies (low and high dimensional) are considered. The main attention is paid to the detection and learning of spatial patterns by the algorithms applied.
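A PNN of the kind described above is, at its core, a Parzen kernel density estimate per class combined with a maximum a posteriori decision rule. The sketch below illustrates the principle on made-up two-dimensional data; the function name and bandwidth value are ours, not from the paper.

```python
import numpy as np

def pnn_classify(X_train, y_train, x, sigma=0.5, priors=None):
    """Probabilistic neural network: Gaussian kernel density estimate
    per class, followed by a maximum a posteriori decision."""
    classes = np.unique(y_train)
    if priors is None:  # estimate class priors from frequencies
        priors = {c: np.mean(y_train == c) for c in classes}
    posts = []
    for c in classes:
        Xc = X_train[y_train == c]
        d2 = np.sum((Xc - x) ** 2, axis=1)            # squared distances to patterns
        dens = np.mean(np.exp(-d2 / (2 * sigma**2)))  # kernel density estimate
        posts.append(priors[c] * dens)                # unnormalized posterior
    return classes[int(np.argmax(posts))]

# Two well-separated 2-D classes
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
pred0 = pnn_classify(X, y, np.array([0.1, -0.1]))
pred1 = pnn_classify(X, y, np.array([2.9, 3.1]))
```

Because the unnormalized posteriors can be normalized into class probabilities, this structure also yields the quantification of accuracy mentioned in the abstract.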
Abstract:
In the present study, different environmental variables were evaluated for digital soil mapping in a region in the north of the State of Minas Gerais, Brazil, using artificial neural networks (ANN). The terrain attributes slope and compound topographic index (CTI), derived from a digital elevation model, three bands of the Quickbird sensor, and a lithology map were combined, and the importance of each variable for discriminating the mapping units was evaluated. The neural network simulator used was the "Java Neural Network Simulator", with the "backpropagation" learning algorithm. For each tested set of variables, one ANN was selected to predict the mapping units; the maps generated from these sets were compared with a soil map produced by the conventional method to determine the agreement between the classifications. This comparison showed that the map produced using all environmental variables (slope, CTI, Quickbird bands 1, 2 and 3, and lithology) performed better (67.4% agreement) than the maps produced by the other variable sets. Among the variables used, slope contributed the most, since when it was removed from the analysis the agreement was lowest (33.7%). The results demonstrate that this approach can help overcome some of the problems of soil mapping in Brazil, especially at scales larger than 1:25,000, making it faster and cheaper, particularly where high-spatial-resolution remote sensing data are available at lower cost and terrain attributes can be readily obtained in geographic information systems (GIS).
Abstract:
The proportion of the population living in or around cities is larger than ever. Urban sprawl and car dependence have taken over from the pedestrian-friendly compact city. Environmental problems like air pollution, land waste or noise, and health problems are the result of this still-continuing process. Urban planners have to find solutions to these complex problems, and at the same time ensure the economic performance of the city and its surroundings. Meanwhile, an increasing quantity of socio-economic and environmental data is being acquired. In order to get a better understanding of the processes and phenomena taking place in the complex urban environment, these data should be analysed. Numerous methods for modelling and simulating such a system exist, are still under development, and can be exploited by urban geographers to improve our understanding of the urban metabolism. Modern and innovative visualisation techniques help in communicating the results of such models and simulations. This thesis covers several methods for the analysis, modelling, simulation and visualisation of problems related to urban geography. The analysis of high-dimensional socio-economic data using artificial neural network techniques, especially self-organising maps, is shown using two examples at different scales. The problem of spatio-temporal modelling and data representation is treated and some possible solutions are shown. The simulation of urban dynamics, and more specifically of the traffic due to commuting to work, is illustrated using multi-agent micro-simulation techniques. A section on visualisation methods presents cartograms for transforming the geographic space into a feature space, and the distance circle map, a centre-based map representation particularly useful for urban agglomerations. Some issues concerning the importance of scale in urban analysis and the clustering of urban phenomena are discussed.
A new approach to defining urban areas at different scales is developed, and the link with percolation theory is established. Fractal statistics, especially the lacunarity measure, and scaling laws are used for characterising urban clusters. In a final section, population evolution is modelled using a model close to the well-established gravity model. The work covers a wide range of methods useful in urban geography. These methods should be developed further and at the same time find their way into the daily work and decision processes of urban planners.
Abstract:
A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically infeasible, even in simpler systems like dissociated neuronal cultures. We introduce an improved algorithmic approach based on Transfer Entropy to reconstruct structural connectivity from network activity monitored through calcium imaging. In this study we focus on the inference of excitatory synaptic links. Based on information theory, our method requires no prior assumptions about the statistics of neuronal firing and neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the functional network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (bursting or non-bursting). Thus, by conditioning with respect to the global mean activity, we improve the performance of our method. This allows us to focus the analysis on specific dynamical regimes of the network in which the inferred functional connectivity is shaped by monosynaptic excitatory connections, rather than by collective synchrony. Our method can discriminate between actual causal influences between neurons and spurious non-causal correlations due to light-scattering artifacts, which inherently affect the quality of fluorescence imaging. Compared to other reconstruction strategies, such as cross-correlation or Granger causality methods, our method based on improved Transfer Entropy is remarkably more accurate. In particular, it provides a good estimation of the excitatory network clustering coefficient, allowing for discrimination between weakly and strongly clustered topologies.
Finally, we demonstrate the applicability of our method to analyses of real recordings of in vitro disinhibited cortical cultures, where we suggest that excitatory connections are characterized by an elevated (although not extreme) level of clustering compared to a random graph and can be markedly non-local.
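The basic (unconditioned) transfer entropy for binary time series with a one-step delay can be estimated from joint histograms, as sketched below. This toy version omits the paper's key refinements (working on calcium fluorescence and conditioning on the global mean activity), and variable names are illustrative.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """TE(X->Y) = sum over (y_{t+1}, y_t, x_t) of
    p(y_{t+1}, y_t, x_t) * log2[ p(y_{t+1}|y_t, x_t) / p(y_{t+1}|y_t) ],
    estimated from counts, with a history length of one sample."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    pairs = Counter(zip(y[1:], y[:-1]))
    yx = Counter(zip(y[:-1], x[:-1]))
    yh = Counter(y[:-1])
    n = len(x) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / yx[(y0, x0)]          # p(y1 | y0, x0)
        p_cond_part = pairs[(y1, y0)] / yh[y0]  # p(y1 | y0)
        te += p_joint * np.log2(p_cond_full / p_cond_part)
    return te

# Driven pair: y copies x with one-step delay (plus noise), so TE(x->y) >> TE(y->x)
rng = np.random.default_rng(2)
x = rng.integers(0, 2, 5000)
noise = rng.random(5000) < 0.1
y = np.where(noise[1:], rng.integers(0, 2, 4999), x[:-1])
y = np.concatenate([[0], y])
te_xy = transfer_entropy(x, y)
te_yx = transfer_entropy(y, x)
```

The asymmetry te_xy > te_yx is what lets this measure, unlike plain correlation, distinguish the direction of an influence.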
Abstract:
Abstract: This work is concerned with the development and application of novel unsupervised learning methods, with two target applications in mind: the analysis of forensic case data and the classification of remote sensing images. First, a method based on a symbolic optimization of the inter-sample distance measure is proposed to improve the flexibility of spectral clustering algorithms, and applied to the problem of forensic case data. This distance is optimized using a loss function related to the preservation of neighbourhood structure between the input space and the space of principal components, and solutions are found using genetic programming. Results are compared to a variety of state-of-the-art clustering algorithms. Subsequently, a new large-scale clustering method based on a joint optimization of feature extraction and classification is proposed and applied to various databases, including two hyperspectral remote sensing images. The algorithm makes use of a functional model (e.g., a neural network) for clustering, which is trained by stochastic gradient descent. Results indicate that such a technique can easily scale to huge databases, can avoid the so-called out-of-sample problem, and can compete with or even outperform existing clustering algorithms on both artificial data and real remote sensing images. This is verified on small databases as well as on very large problems.
Abstract:
Brain fluctuations at rest are not random but are structured in spatial patterns of correlated activity across different brain areas. The question of how resting-state functional connectivity (FC) emerges from the brain's anatomical connections has motivated several experimental and computational studies to understand structure-function relationships. However, the mechanistic origin of resting state is obscured by large-scale models' complexity, and a close structure-function relation is still an open problem. Thus, a realistic but simple enough description of relevant brain dynamics is needed. Here, we derived a dynamic mean field model that consistently summarizes the realistic dynamics of a detailed spiking and conductance-based synaptic large-scale network, in which connectivity is constrained by diffusion imaging data from human subjects. The dynamic mean field approximates the ensemble dynamics, whose temporal evolution is dominated by the longest time scale of the system. With this reduction, we demonstrated that FC emerges as structured linear fluctuations around a stable low firing activity state close to destabilization. Moreover, the model can be further and crucially simplified into a set of motion equations for statistical moments, providing a direct analytical link between anatomical structure, neural network dynamics, and FC. Our study suggests that FC arises from noise propagation and dynamical slowing down of fluctuations in an anatomically constrained dynamical system. Altogether, the reduction from spiking models to statistical moments presented here provides a new framework to explicitly understand the building up of FC through neuronal dynamics underpinned by anatomical connections and to drive hypotheses in task-evoked studies and for clinical applications.
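The picture of FC as structured linear fluctuations around a stable state can be illustrated with a linear noise-driven system: for dx = A x dt + dW with a stable drift matrix A and noise covariance Q, the stationary covariance S (the model analogue of FC) solves the Lyapunov equation A S + S A^T + Q = 0, which ties correlations directly to the coupling structure in A. A minimal sketch with a toy three-node network (not the paper's connectome-constrained model):

```python
import numpy as np

def stationary_covariance(A, Q):
    """Solve the continuous Lyapunov equation A S + S A^T + Q = 0
    by vectorization: (I (x) A + A (x) I) vec(S) = -vec(Q)."""
    n = A.shape[0]
    I = np.eye(n)
    L = np.kron(I, A) + np.kron(A, I)
    return np.linalg.solve(L, -Q.flatten()).reshape(n, n)

# Toy 3-node network: leaky nodes coupled by a symmetric "structural" matrix C
C = np.array([[0.0, 0.4, 0.0],
              [0.4, 0.0, 0.2],
              [0.0, 0.2, 0.0]])
A = -np.eye(3) + C        # stable drift: local decay plus anatomical coupling
Q = 0.1 * np.eye(3)       # noise covariance
S = stationary_covariance(A, Q)
fc = S / np.sqrt(np.outer(np.diag(S), np.diag(S)))  # correlation matrix ("FC")
```

Nodes 1 and 3 are not directly coupled in C, yet acquire a nonzero correlation in fc through noise propagation via node 2, which is the qualitative point of the abstract.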
Abstract:
The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that must be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As a multivariate process, it was important first to define the influence of each factor. In particular, it was important to define the influence of geology, as it is closely associated with indoor radon. This association was indeed observed for the Swiss data but did not prove to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both the univariate and multivariate level, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving-window methods. The use of the Quantité Morisita Index (QMI) as a procedure to evaluate data clustering as a function of the radon level was proposed. Existing declustering methods were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase went along with the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, data partitioning was optimized in order to cope with the stationarity conditions of geostatistical models. Common spatial modeling methods such as K Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset.
A bottom-up approach to method complexity was adopted, and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests of data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformation and simulations thus allowed the use of multigaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for modeling extreme values through classification. Simulation scenarios were proposed, including an alternative proposal for the reproduction of the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed more robustly. An error measure was defined in relation to the decision function for hardening the data classification. Among the classification methods, probabilistic neural networks (PNN) proved better adapted to modeling high-threshold categorization and to automation. Support vector machines (SVM), on the contrary, performed well under balanced category conditions. In general, it was concluded that no particular prediction or estimation method is better under all conditions of scale and neighborhood definitions. Simulations should be the basis, while other methods can provide complementary information to achieve efficient indoor radon decision making.
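A GRNN, one of the exploratory tools mentioned above, is essentially Nadaraya-Watson kernel regression: the prediction at a query point is a distance-weighted average of the training targets, controlled by a single bandwidth sigma. A minimal sketch on made-up one-dimensional data (not the Swiss radon dataset):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.3):
    """General Regression Neural Network: prediction is the Gaussian
    kernel-weighted average of training targets (Nadaraya-Watson)."""
    preds = []
    for x in np.atleast_2d(X_query):
        d2 = np.sum((X_train - x) ** 2, axis=1)  # squared distances to patterns
        w = np.exp(-d2 / (2 * sigma**2))         # pattern-layer activations
        preds.append(np.sum(w * y_train) / np.sum(w))
    return np.array(preds)

# 1-D toy field: a noisy sine, predicted at a few held-out locations
rng = np.random.default_rng(3)
X = rng.uniform(0, 2 * np.pi, (200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.05, 200)
X_test = np.array([[np.pi / 2], [np.pi], [3 * np.pi / 2]])
pred = grnn_predict(X, y, X_test, sigma=0.2)
```

The single bandwidth parameter is what makes GRNN attractive as an exploratory spatial tool: tuning sigma directly trades off smoothing against local detail, playing a role analogous to the neighborhood definitions discussed above.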
Abstract:
Vulnerability and psychic illness. Based on a sample of 1,701 college and university students from four different sites in Switzerland, the U.S., and Argentina, this study investigated the interrelationships between insufficient coping skills under chronic stress and impaired general health. We sought to develop standardised means for "early" identification of students at risk of mental health problems, as these students may benefit from "early" interventions before psychiatric symptoms develop and reach clinically relevant thresholds. All students completed two self-report questionnaires: the Coping Strategies Inventory "COPE" and the Zurich Health Questionnaire "ZHQ", the latter assessing "regular exercises", "consumption behavior", "impaired physical health", "psychosomatic disturbances", and "impaired mental health". These data were subjected to structure analyses based on neural network approaches, using the different study sites' data subsets as independent "learning" and "test" samples. We found two highly stable COPE scales that quantified basic coping behaviour in terms of "activity-passivity" and "defeatism-resilience". The excellent reproducibility across study sites suggested that the new scales characterise socioculturally independent personality traits. Correlation analyses for external validation revealed a close relationship between high scores on the defeatism scale and impaired physical and mental health, underlining the scales' clinical relevance. Our results suggested in particular: (1) that the proposed method is a powerful screening tool for early detection and prevention of psychiatric disorders; and (2) that physical activity, such as regular exercise, plays a critical role not only in preventing health problems but also as a component of early intervention programs.
Abstract:
Role of transient guidepost neurons in corpus callosum development and guidance. Axonal guidance is a key step that allows neurons to build specific synaptic connections and to integrate into a functional neural network. Intermediate targets, or guidepost cells, act as critical elements that help guide axons over long distances in the brain, providing directional information all along their route. Subpopulations of midline glial cells have been shown to guide corpus callosum (CC) axons to the contralateral cerebral hemisphere. While neuronal cells are also present in the developing corpus callosum, their role has remained elusive. Our published results revealed that, during embryonic development, the CC is populated, in addition to astroglia, by numerous glutamatergic and GABAergic guidepost neurons that are essential for the correct midline crossing of callosal axons (Niquille et al., PLoS Biology, 2009). In this work, I combined morphological and 3D confocal imaging techniques to define the neuro-anatomical frame of our system. Moreover, with the use of in vitro transplantations in slices, co-explant experiments, siRNA manipulations on primary neuronal cultures and in vivo analysis of knock-out mice, we demonstrated that CC neurons direct callosal axon outgrowth, in part through the attractive action of Sema3C on its Npn-1 receptor. Recently, we studied the origin and the dynamic aspects of these processes, as well as the molecular mechanisms involved in the establishment of this axonal tract (Niquille et al., submitted).
First, we clarified the origin and identity of the CC GABAergic guidepost neurons using extensive in vivo cell fate-mapping experiments. We identified two distinct GABAergic neuronal subpopulations, originating from the medial (MGE) and caudal (CGE) ganglionic eminences. I then studied in more detail the dynamic interactions between CC neurons and callosal axons by confocal time-lapse video microscopy, and further characterized the role of each guidepost neuronal subpopulation in callosal guidance. Interestingly, MGE- and CGE-derived GABAergic neurons are both attractive for callosal axons in transplantation experiments. Finally, we dissected the molecular basis of these guidance mechanisms using FACS sorting combined with an extensive genetic screen for molecules of interest by a sensitive RT-PCR technique, as well as in situ hybridization. I also investigated whether CC guidepost neurons are involved in agenesis of the CC, which occurs in numerous human congenital syndromes. The Aristaless-related homeobox gene (Arx) regulates GABAergic neuron migration, and its mutation leads to numerous human pathologies including X-linked lissencephaly with abnormal genitalia (XLAG) and severe CC agenesis. Interestingly, I found that ARX is expressed in all the guidepost GABAergic neuronal populations of the CC and that Arx-/- embryos exhibit a drastic loss of CC GABAergic interneurons accompanied by callosal axon navigation defects (Niquille et al., in preparation). In addition, we discovered that mice deficient for the ciliogenic transcription factor RFX3 suffer from CC agenesis associated with early midline patterning defects and a secondary disorganisation of guidepost glutamatergic neurons (Benadiba et al., submitted).
This strongly points to the potential involvement of both types of guidepost neurons in human CC agenesis. Taken together, my thesis work reveals novel functions for transient neurons in axonal guidance and brings new perspectives on the respective roles of neuronal and glial cells in these processes.
Abstract:
The cross-recognition of peptides by cytotoxic T lymphocytes is a key element in immunology and in particular in peptide based immunotherapy. Here we develop three-dimensional (3D) quantitative structure-activity relationships (QSARs) to predict cross-recognition by Melan-A-specific cytotoxic T lymphocytes of peptides bound to HLA A*0201 (hereafter referred to as HLA A2). First, we predict the structure of a set of self- and pathogen-derived peptides bound to HLA A2 using a previously developed ab initio structure prediction approach [Fagerberg et al., J. Mol. Biol., 521-46 (2006)]. Second, shape and electrostatic energy calculations are performed on a 3D grid to produce similarity matrices which are combined with a genetic neural network method [So et al., J. Med. Chem., 4347-59 (1997)] to generate 3D-QSAR models. The models are extensively validated using several different approaches. During the model generation, the leave-one-out cross-validated correlation coefficient (q²) is used as the fitness criterion and all obtained models are evaluated based on their q² values. Moreover, the best model obtained for a partitioned data set is evaluated by its correlation coefficient (r = 0.92 for the external test set). The physical relevance of all models is tested using a functional dependence analysis and the robustness of the models obtained for the entire data set is confirmed using y-randomization. Finally, the validated models are tested for their utility in the setting of rational peptide design: their ability to discriminate between peptides that only contain side chain substitutions in a single secondary anchor position is evaluated. In addition, the predicted cross-recognition of the mono-substituted peptides is confirmed experimentally in chromium-release assays.
These results underline the utility of 3D-QSARs in peptide-mimetic design and suggest that the properties of the unbound epitope suffice to capture most of the information that determines cross-recognition.
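The leave-one-out cross-validated q² used above as the fitness criterion can be sketched in a few lines. This is an illustrative implementation with an ordinary linear model standing in for the genetic neural network; the function name and data shapes are assumptions, not part of the original work.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

def loo_q2(X, y):
    """Leave-one-out cross-validated q^2 = 1 - PRESS / TSS."""
    y = np.asarray(y, dtype=float)
    preds = np.empty_like(y)
    for train, test in LeaveOneOut().split(X):
        # refit on all samples but one, predict the held-out sample
        model = LinearRegression().fit(X[train], y[train])
        preds[test] = model.predict(X[test])
    press = np.sum((y - preds) ** 2)   # predictive residual sum of squares
    tss = np.sum((y - y.mean()) ** 2)  # total sum of squares
    return 1.0 - press / tss
```

A q² close to 1 indicates good predictive ability; y-randomization, as used in the abstract, would repeat this with shuffled activities and should drive q² toward or below zero.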
Abstract:
The original cefepime product was withdrawn from the Swiss market in January 2007 and replaced by a generic 10 months later. The goals of the study were to assess the impact of this cefepime shortage on the use and costs of alternative broad-spectrum antibiotics, on antibiotic policy, and on resistance of Pseudomonas aeruginosa to carbapenems, ceftazidime and piperacillin-tazobactam. A generalized regression-based interrupted time series model assessed how much the shortage changed the monthly use and costs of cefepime and of selected alternative broad-spectrum antibiotics (ceftazidime, imipenem-cilastatin, meropenem, piperacillin-tazobactam) in 15 Swiss acute care hospitals from January 2005 to December 2008. Resistance of P. aeruginosa was compared before and after the cefepime shortage. There was a statistically significant increase in the consumption of piperacillin-tazobactam in hospitals with definitive interruption of cefepime supply, and of meropenem in hospitals with transient interruption of cefepime supply. Consumption of each alternative antibiotic tended to increase during the cefepime shortage and to decrease when the cefepime generic was released. These shifts were associated with significantly higher overall costs. There was no significant change in hospitals with uninterrupted cefepime supply. The alternative antibiotics for which an increase in consumption showed the strongest association with a progression of resistance were the carbapenems. The use of alternative antibiotics after cefepime withdrawal was associated with a significant increase in piperacillin-tazobactam and meropenem use and in overall costs, and with a decrease in susceptibility of P. aeruginosa in these hospitals. This warrants caution with regard to shortages and withdrawals of antibiotics.
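The interrupted time series model behind this analysis estimates an underlying trend (b1), a level change at the interruption (b2), and a trend change after it (b3). A minimal sketch of the segmented-regression design on simulated monthly consumption data follows; the original study additionally handled autocorrelated errors, which plain least squares, used here for brevity, does not. All data and coefficient values are invented for illustration.

```python
import numpy as np

def its_design(n_months, break_month):
    """Design matrix for segmented regression:
    y_t = b0 + b1*t + b2*post_t + b3*(t - break)_+ + e_t
    """
    t = np.arange(n_months, dtype=float)
    post = (t >= break_month).astype(float)     # level-change indicator
    since = post * (t - break_month)            # months since interruption
    return np.column_stack([np.ones(n_months), t, post, since])

rng = np.random.default_rng(0)
n, brk = 48, 24                                 # 4 years, break at month 24
X = its_design(n, brk)
true_beta = np.array([10.0, -0.05, 1.5, 0.04])  # b0, b1, b2, b3 (DDD/100 bed-days)
y = X @ true_beta + rng.normal(0.0, 0.1, n)
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With this parameterization, `beta_hat[1]` recovers the pre-interruption monthly trend, `beta_hat[2]` the immediate jump in level, and `beta_hat[3]` the post-interruption change in slope, mirroring the b1/b2/b3 interpretation in the study.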
Abstract:
A parametric procedure for the blind inversion of nonlinear channels is proposed, based on a recent method of blind source separation in nonlinear mixtures. Experiments show that the proposed algorithms perform efficiently, even in the presence of hard distortion. The method, based on the minimization of the output mutual information, requires knowledge of the log-derivative of the input distribution (the so-called score function). Each algorithm consists of three adaptive blocks: one devoted to the adaptive estimation of the score function, and two others estimating the inverses of the linear and nonlinear parts of the channel, (quasi-)optimally adapted using the estimated score functions. This paper is mainly concerned with the nonlinear part, for which we propose two parametric models, the first based on a polynomial model and the second on a neural network, whereas [14, 15] proposed non-parametric approaches.
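The score function ψ(x) = -p'(x)/p(x) that drives the adaptation can be estimated nonparametrically; a minimal kernel-density sketch of that first adaptive block is shown below. The function name, bandwidth, and estimator choice are assumptions for illustration, not the estimator used in the paper.

```python
import numpy as np

def kernel_score(samples, x, bandwidth=0.3):
    """Estimate the score function psi(x) = -p'(x)/p(x) from samples,
    using a Gaussian kernel density estimate of p and its derivative."""
    d = (x[:, None] - samples[None, :]) / bandwidth
    k = np.exp(-0.5 * d ** 2)                # unnormalized Gaussian kernels
    p = k.sum(axis=1)                        # density estimate (up to a constant)
    dp = (-(d / bandwidth) * k).sum(axis=1)  # derivative of the estimate
    return -dp / p                           # the shared constant cancels
```

For a standard Gaussian input the true score is ψ(x) = x, which provides a quick sanity check; the kernel bandwidth slightly biases the estimate toward x/(1 + h²).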
Abstract:
In this work we present a simulation of a recognition process for simple plant leaves that uses perimeter characterization as the sole discriminating parameter. A data coding that makes the representation independent of leaf size and orientation may penalize recognition performance for some varieties. Border description sequences are used to characterize the leaves. Independent Component Analysis (ICA) is then applied in order to determine the best number of components to consider for the classification task, which is implemented by means of an Artificial Neural Network (ANN). The results obtained with ICA as a pre-processing tool are satisfactory; compared with some references, our system improves recognition success to up to 80.8%, depending on the number of independent components considered.
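The ICA-then-ANN pipeline described above can be sketched with off-the-shelf components. The synthetic "border description" data, the number of components, and the network size below are all invented for illustration; the original work tuned the number of independent components against recognition success.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical border-description sequences: one row per leaf.
rng = np.random.default_rng(1)
n_leaves, seq_len = 120, 64
labels = rng.integers(0, 3, n_leaves)               # three leaf varieties
X = rng.normal(0.0, 1.0, (n_leaves, seq_len))
X += labels[:, None] * np.linspace(0.0, 1.0, seq_len)  # class-dependent signal

clf = make_pipeline(
    FastICA(n_components=8, random_state=0),        # ICA pre-processing
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
clf.fit(X[:80], labels[:80])
acc = clf.score(X[80:], labels[80:])                # held-out recognition rate
```

Sweeping `n_components` in the `FastICA` step reproduces the experiment of the abstract: recognition success varies with the number of independent components retained.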
Abstract:
In this work we explore multivariate empirical mode decomposition (mEMD) combined with a neural network classifier as a technique for face recognition. Images are simultaneously decomposed by means of EMD, and the distance between the modes of an image and the modes of the representative image of each class is calculated using three different distance measures. A neural network is then trained using 10-fold cross-validation to derive a classifier. Preliminary results (over 98% classification rate) are satisfactory and justify deeper investigation of how to apply mEMD to face recognition.
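The mode-distance step can be sketched as follows. Since mEMD implementations vary, the decomposition itself is assumed to have been done already; the three distance measures (Euclidean, cosine, correlation) are plausible choices, not confirmed by the abstract, and the neural network of the original pipeline is replaced here by a simple nearest-representative rule to keep the example self-contained.

```python
import numpy as np

def mode_distances(modes_a, modes_b):
    """Compare two stacks of intrinsic mode functions with three
    distance measures (assumed: Euclidean, cosine, correlation)."""
    a, b = modes_a.ravel(), modes_b.ravel()
    eucl = np.linalg.norm(a - b)
    cos = 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    corr = 1.0 - np.corrcoef(a, b)[0, 1]
    return np.array([eucl, cos, corr])

def classify(modes, class_reps):
    """Assign the class whose representative modes are nearest;
    in the original work these distances feed a neural network instead."""
    scores = [mode_distances(modes, rep).sum() for rep in class_reps]
    return int(np.argmin(scores))
```

In the full pipeline the per-class distance vectors would be concatenated into a feature vector and used to train the classifier under 10-fold cross-validation.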