33 results for Model driven developments
Abstract:
Abnormal development can lead to deficits in adult brain function, a trajectory likely underlying adolescent-onset psychiatric conditions such as schizophrenia. Developmental manipulations yielding adult deficits in rodents provide an opportunity to explore mechanisms involved in a delayed emergence of anomalies driven by developmental alterations. Here we assessed whether oxidative stress during presymptomatic stages causes adult anomalies in rats with a neonatal ventral hippocampal lesion, a developmental rodent model useful for schizophrenia research. Juvenile and adolescent treatment with the antioxidant N-acetyl cysteine prevented the reduction of prefrontal parvalbumin interneuron activity observed in this model, as well as electrophysiological and behavioral deficits relevant to schizophrenia. Adolescent treatment with the glutathione peroxidase mimic ebselen also reversed behavioral deficits in this animal model. These findings suggest that presymptomatic oxidative stress yields abnormal adult brain function in a developmentally compromised brain, and highlight redox modulation as a potential target for early intervention.
Abstract:
An epidemic model is formulated by a reaction–diffusion system where the spatial pattern formation is driven by cross-diffusion. The reaction terms describe the local dynamics of susceptible and infected species, whereas the diffusion terms account for the spatial distribution dynamics. For both self-diffusion and cross-diffusion, nonlinear constitutive assumptions are suggested. To simulate the pattern formation, two finite volume formulations are proposed, which employ a conservative and a non-conservative discretization, respectively. An efficient simulation is obtained by a fully adaptive multiresolution strategy. Numerical examples illustrate the impact of the cross-diffusion on the pattern formation.
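The abstract does not spell out the governing equations; the sketch below is a generic susceptible-infected cross-diffusion system consistent with the description (nonlinear self- and cross-diffusion plus local SI dynamics). The constitutive functions d_S, d_I, chi and the standard-incidence reaction terms are illustrative assumptions, not necessarily the choices made in the paper.

\[
\begin{aligned}
\partial_t S &= \nabla\cdot\bigl(d_S(S)\,\nabla S\bigr) + \nabla\cdot\bigl(\chi(S)\,\nabla I\bigr) + f(S,I),\\
\partial_t I &= \nabla\cdot\bigl(d_I(I)\,\nabla I\bigr) + g(S,I),
\end{aligned}
\qquad
f(S,I)=\mu(S+I)-\beta\,\frac{SI}{S+I}-\mu S,\quad
g(S,I)=\beta\,\frac{SI}{S+I}-(\mu+\gamma)\,I.
\]

Here d_S and d_I are the nonlinear self-diffusion coefficients, \chi(S) models the cross-diffusive flux of susceptibles along gradients of the infected density, \beta is the transmission rate, \gamma the recovery rate and \mu the demographic turnover; spatial patterns can emerge when the cross-diffusion term destabilizes the spatially homogeneous endemic equilibrium.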
Abstract:
Protein-ligand docking has made important progress during the last decade and has become a powerful tool for drug development, opening the way to virtual high-throughput screening and in silico structure-based ligand design. Despite this flattering picture, recent publications have shown that the docking problem is far from being solved, and that further developments are still needed to achieve high prediction success rates and accuracy. Introducing an accurate description of the solvation effect upon binding is thought to be essential to achieve this goal. In particular, EADock uses the Generalized Born Molecular Volume 2 (GBMV2) solvent model, which has been shown to reproduce accurately the desolvation energies calculated by solving the Poisson equation. Here, the implementation of the Fast Analytical Continuum Treatment of Solvation (FACTS) as an implicit solvation model in small-molecule docking calculations has been assessed using the EADock docking program. Our results strongly support the use of FACTS for docking. The success rates of EADock/FACTS and EADock/GBMV2 are similar, i.e. around 75% for local docking and 65% for blind docking. However, these results come at a much lower computational cost: FACTS is 10 times faster than GBMV2 in calculating the total electrostatic energy, and allows a speed-up of EADock by a factor of 4. This study also supports the EADock development strategy of relying on the CHARMM package for energy calculations, which enables straightforward implementation and testing of the latest developments in the field of molecular modeling.
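For context, the 10-fold and 4-fold figures reported above are mutually consistent under a simple Amdahl's-law-style reading in which only the electrostatic term is accelerated; the runtime fraction p is inferred here for illustration and is not stated in the abstract:

\[
S_{\mathrm{overall}}=\frac{1}{(1-p)+p/10}=4
\;\Longrightarrow\;
1-\tfrac{9}{10}\,p=\tfrac{1}{4}
\;\Longrightarrow\;
p\approx 0.83,
\]

i.e. the electrostatic energy evaluation would need to account for roughly 80-85% of EADock's original runtime for a 10-fold faster solvation term to translate into a 4-fold overall speed-up.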
Abstract:
Functional neuroimaging has undergone spectacular developments in recent years. Paradoxically, its neurobiological bases have remained elusive, resulting in an intense debate around the cellular mechanisms taking place upon activation that could contribute to the signals measured. Taking advantage of a modeling approach, we propose here a coherent neurobiological framework that not only explains several in vitro and in vivo observations but also provides a physiological basis to interpret imaging signals. First, based on a model of compartmentalized energy metabolism, we show that complex kinetics of NADH changes observed in vitro can be accounted for by distinct metabolic responses in two cell populations reminiscent of neurons and astrocytes. Second, extended application of the model to an in vivo situation allowed us to reproduce the evolution of intraparenchymal oxygen levels upon activation as measured experimentally without substantially altering the initial parameter values. Finally, applying the same model to functional neuroimaging in humans, we were able to determine that the early negative component of the blood oxygenation level-dependent response recorded with functional MRI, known as the initial dip, critically depends on the oxidative response of neurons, whereas the late aspects of the signal correspond to a combination of responses from cell types with two distinct metabolic profiles that could be neurons and astrocytes. In summary, our results, obtained with such a modeling approach, support the concept that both neuronal and glial metabolic responses form essential components of neuroimaging signals.
Abstract:
Many mucosal pathogens invade the host by initially infecting the organized mucosa-associated lymphoid tissue (o-MALT) such as Peyer's patches or nasal cavity-associated lymphoid tissue (NALT) before spreading systemically. There is no clear demonstration that serum antibodies can prevent infections in o-MALT. We have tested this possibility by using the mouse mammary tumor virus (MMTV) as a model system. In peripheral lymph nodes or in Peyer's patches or NALT, MMTV initially infects B lymphocytes, which as a consequence express a superantigen (SAg) activity. The SAg molecule induces the local activation of a subset of T cells within 6 days after MMTV infection. We report that similar levels of anti-SAg antibody (immunoglobulin G) in serum were potent inhibitors of the SAg-induced T-cell response both in peripheral lymph nodes and in Peyer's patches or NALT. This result clearly demonstrates that systemic antibodies can gain access to Peyer's patches or NALT.
Abstract:
Following recent technological advances, digital image archives have grown in quality and quantity at an unprecedented rate. Despite the enormous possibilities they offer, these advances raise new questions about the processing of the masses of data acquired. This question is at the heart of this Thesis: problems of processing digital information at very high spatial and/or spectral resolution are addressed with statistical learning approaches, namely kernel methods. This Thesis studies image classification problems, that is, the categorization of pixels into a reduced number of classes reflecting the spectral and contextual properties of the objects they represent. The emphasis is placed on the efficiency of the algorithms, as well as on their simplicity, so as to increase their potential for adoption by users. Moreover, the challenge of this Thesis is to remain close to the concrete problems of satellite image users without losing sight of the interest of the proposed methods for the machine learning community from which they originate. In this sense, the work is deliberately transdisciplinary, maintaining a strong link between the two fields in all the developments proposed. Four models are proposed: the first addresses the problem of high dimensionality and data redundancy with a model that optimizes classification performance while adapting to the particularities of the image. This is made possible by a ranking of the variables (the bands) that is optimized jointly with the base model: in this way, only the variables relevant to the problem are used by the classifier. The scarcity of labeled information, and the uncertainty about its relevance to the problem, motivate the next two models, based respectively on active learning and on semi-supervised methods: the former improves the quality of a training set through direct interaction between the user and the machine, whereas the latter uses unlabeled pixels to improve the description of the available data and the robustness of the model. Finally, the last model considers the more theoretical question of structure among the outputs: the integration of this source of information, never before considered in remote sensing, opens new research challenges. Advanced kernel methods for remote sensing image classification. Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009. Abstract: The technical developments of recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images are available to users. However, even if these advances open more and more possibilities in the use of digital imagery, they also raise several problems of storage and processing. The latter is considered in this Thesis: the processing of very high spatial and spectral resolution images is treated with approaches based on data-driven algorithms relying on kernel methods. In particular, the problem of image classification, i.e. the categorization of the image's pixels into a reduced number of classes reflecting spectral and contextual properties, is studied through the different models presented.
The emphasis is placed on algorithmic efficiency and on the simplicity of the proposed approaches, so as to avoid overly complex models that users would not adopt. The major challenge of the Thesis is to remain close to concrete remote sensing problems without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the proposed models have been developed keeping in mind the need for such a synergy. Four models are proposed: first, an adaptive model learning the relevant image features has been proposed to solve the problem of high dimensionality and collinearity of the image features. This model automatically provides an accurate classifier and a ranking of the relevance of the single features. The scarcity and unreliability of labeled information were the common root of the second and third models proposed: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine or use the unlabeled data to increase the robustness and quality of the data description. Both solutions have been explored, resulting in two methodological contributions based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs has been considered in the last model, which, by integrating output similarity into the model, opens new challenges and opportunities for remote sensing image processing.
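As a concrete illustration of the active-learning contribution mentioned in both versions of this abstract, below is a minimal sketch of a generic pool-based uncertainty-sampling loop built around an SVM classifier. It is not the algorithm developed in the thesis: the use of scikit-learn, the function name and the binary-classification simplification are assumptions made purely for illustration.

    import numpy as np
    from sklearn.svm import SVC

    def uncertainty_sampling(X_labeled, y_labeled, X_pool, y_pool, n_queries=10):
        # Generic pool-based active learning (binary case for simplicity):
        # repeatedly query the pool sample closest to the decision boundary.
        clf = None
        for _ in range(n_queries):
            clf = SVC(kernel="rbf", gamma="scale").fit(X_labeled, y_labeled)
            margins = np.abs(clf.decision_function(X_pool))  # small |f(x)| = most uncertain
            q = int(np.argmin(margins))                      # most ambiguous pixel
            # In a real session the user supplies this label; the pool label stands in here.
            X_labeled = np.vstack([X_labeled, X_pool[q:q + 1]])
            y_labeled = np.append(y_labeled, y_pool[q])
            X_pool = np.delete(X_pool, q, axis=0)
            y_pool = np.delete(y_pool, q)
        return clf, X_labeled, y_labeled

Margin sampling is only one possible query heuristic; the point is the interaction loop in which the user labels only the pixels the current model is least certain about.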
Abstract:
Abstract: This work is concerned with the development and application of novel unsupervised learning methods, having in mind two target applications: the analysis of forensic case data and the classification of remote sensing images. First, a method based on a symbolic optimization of the inter-sample distance measure is proposed to improve the flexibility of spectral clustering algorithms, and applied to the problem of forensic case data. This distance is optimized using a loss function related to the preservation of neighborhood structure between the input space and the space of principal components, and solutions are found using genetic programming. Results are compared to a variety of state-of-the-art clustering algorithms. Subsequently, a new large-scale clustering method based on a joint optimization of feature extraction and classification is proposed and applied to various databases, including two hyperspectral remote sensing images. The algorithm makes use of a functional model (e.g., a neural network) for clustering, which is trained by stochastic gradient descent. Results indicate that such a technique can easily scale to huge databases, can avoid the so-called out-of-sample problem, and can compete with or even outperform existing clustering algorithms on both artificial data and real remote sensing images. This is verified on small databases as well as on very large problems. Summary: This research work concerns the development and application of so-called unsupervised learning methods. The applications targeted by these methods are the analysis of forensic data and the classification of hyperspectral remote sensing images. First, an unsupervised classification methodology based on the symbolic optimization of an inter-sample distance measure is proposed. This measure is obtained by optimizing a cost function related to the preservation of the neighborhood structure of a point between the space of the initial variables and the space of principal components. The method is applied to the analysis of forensic data and compared with a range of existing methods. Second, a method based on the joint optimization of variable selection and classification is implemented in a neural network and applied to various databases, including two hyperspectral images. The neural network is trained with a stochastic gradient algorithm, which makes this technique applicable to very high resolution images. The results of this last application show that such a technique can classify very large databases without difficulty and gives results that compare favorably with existing methods.
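The second contribution, clustering with a functional model trained by stochastic gradient descent, can be illustrated with a much simpler stand-in: online k-means, in which the centroids play the role of the model parameters and are updated one sample at a time. This is not the thesis's neural-network algorithm; the function name, initialization and learning-rate schedule below are assumptions for illustration.

    import numpy as np

    def online_kmeans(stream, k, lr0=1.0, seed=0):
        # SGD-style clustering: centroids are updated one sample at a time, so the
        # full image never needs to be held in memory, and any new pixel can later
        # be assigned to its nearest centroid (no out-of-sample problem).
        rng = np.random.default_rng(seed)
        centroids, counts = None, np.zeros(k)
        for x in stream:
            x = np.asarray(x, dtype=float)
            if centroids is None:  # lazy initialization near the first sample
                centroids = x + 0.01 * rng.standard_normal((k, x.size))
            j = int(np.argmin(((centroids - x) ** 2).sum(axis=1)))  # nearest centroid
            counts[j] += 1
            lr = lr0 / counts[j]                     # decreasing per-cluster step size
            centroids[j] += lr * (x - centroids[j])  # stochastic gradient step on the k-means loss
        return centroids

A neural network trained the same way, with one gradient step per incoming sample, scales identically, which is what makes the approach attractive for hyperspectral images containing millions of pixels.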
Abstract:
PURPOSE: Abdominal aortic aneurysms (AAAs) expand because of aortic wall destruction. Enrichment in vascular smooth muscle cells (VSMCs) stabilizes expanding AAAs in rats. Mesenchymal stem cells (MSCs) can differentiate into VSMCs. We have tested the hypothesis that bone marrow-derived MSCs (BM-MSCs) stabilize AAAs in a rat model. MATERIAL AND METHODS: Rat Fischer 344 BM-MSCs were isolated by plastic adhesion and seeded endovascularly in experimental AAAs created using a xenograft obtained from guinea pig. Culture medium without cells was used as the control group. The main criterion was the variation of the aortic diameter at one week and at four weeks. We evaluated the impact of cell seeding on the inflammatory response by immunohistochemistry combined with RT-PCR for MMP9 and TIMP1 at one week. We evaluated the healing process by immunohistochemistry at 4 weeks. RESULTS: The endovascular seeding of BM-MSCs decreased AAA diameter expansion more powerfully than VSMC or culture medium infusion (6.5% ± 9.7, 25.5% ± 17.2 and 53.4% ± 14.4, respectively; p = .007). This result was sustained at 4 weeks. BM-MSCs decreased expression of MMP-9 and infiltration by macrophages (4.7 ± 2.3 vs. 14.6 ± 6.4 per mm², respectively; p = .015) and increased tissue inhibitor of metalloproteinase-1 (TIMP-1), compared with culture medium infusion. BM-MSCs induced formation of a neo-aortic tissue rich in SM alpha-actin-positive cells (22.2 ± 2.7 vs. 115.6 ± 30.4 cells/surface unit, p = .007) surrounded by a dense collagen and elastin network covered by luminal endothelial cells. CONCLUSIONS: We have shown in this rat model of AAA that BM-MSCs exert a specialized function in arterial regeneration that transcends that of mature mesenchymal cells. Our observations identify a population of cells that is easy to isolate and expand for therapeutic interventions based on catheter-driven cell therapy.
Abstract:
The motivation for this research originated in the abrupt rise and fall of minicomputers, which were initially used both for industrial automation and for business applications owing to their significantly lower cost than their predecessors, the mainframes. Later, industrial automation developed its own vertically integrated hardware and software to address the application needs of uninterrupted operations, real-time control and resilience to harsh environmental conditions. This led to the creation of an independent industry, namely industrial automation as used in PLC, DCS, SCADA and robot control systems. This industry today employs over 200'000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries: information technology (IT), focused on business applications, and telecommunications, focused on communications networks and hand-held devices. Already in the 1990s it was foreseen that IT and communications would merge into one information and communication technology (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC started to evolve in parallel with the constant advancement of Moore's law. These developments created the high-performance, low-energy-consumption System-on-Chip (SoC) architecture. Unlike with CISC processors, the RISC processor architecture business is an industry separate from RISC chip manufacturing. It also has several hardware-independent software platforms consisting of an integrated operating system, development environment, user interface and application market, which gives customers more choice thanks to hardware-independent, real-time-capable software applications. An architecture disruption emerged, and the smartphone and tablet market was formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers, and it is now being further extended by tablets. An additional underlying element of this transition is the increasing role of open source technologies, both in software and in hardware. This has driven the microprocessor-based personal computer industry, with its few dominating closed operating system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, and the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-by-click marketing has changed the way application development is compensated: freeware, ad-based or licensed - all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact during the disruptions based on SoC and related software platforms in the ICT industries.
Industrial automation incumbents continue to supply systems based on vertically integrated stacks consisting of proprietary software and proprietary, mainly microprocessor-based, hardware. They enjoy admirable profitability levels on a very narrow customer base, thanks to strong technology-enabled customer lock-in and customers' high risk exposure, as their production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation technology (ICAT) industry. Lately the Internet of Things (IoT) and Weightless networks, a new standard leveraging frequency channels earlier occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will be created that the industrial automation market will in due course face an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation, subject to competition among the incumbents, first through research on cost-competitiveness efforts in captive outsourcing of engineering, research and development, and second through research on process re-engineering in the case of global software support for complex systems. Third, we investigate the views of the industry actors, namely customers, incumbents and newcomers, on the future direction of industrial automation, and conclude with our assessment of the possible routes industrial automation could take, considering the looming rise of the Internet of Things (IoT) and Weightless networks. Industrial automation is an industry dominated by a handful of global players, each of them focused on maintaining its own proprietary solutions. The rise of de facto standards like the IBM PC, Unix, Linux and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung and others, has created the new markets of personal computers, smartphones and tablets, and will eventually also impact industrial automation through game-changing commoditization and related control-point and business-model changes. This trend will inevitably continue, but the transition to a commoditized industrial automation will not happen in the near future.
Abstract:
For patients with chronic lung diseases, such as chronic obstructive pulmonary disease (COPD), exacerbations are life-threatening events causing acute respiratory distress that can even lead to hospitalization and death. Although a great deal of effort has been put into research of exacerbations and potential treatment options, the exact underlying mechanisms are yet to be deciphered and no therapy that effectively targets the excessive inflammation is available. In this study, we report that interleukin-1β (IL-1β) and interleukin-17A (IL-17A) are key mediators of neutrophilic inflammation in influenza-induced exacerbations of chronic lung inflammation. Using a mouse model of disease, our data shows a role for IL-1β in mediating lung dysfunction, and in driving neutrophilic inflammation during the whole phase of viral infection. We further report a role for IL-17A as a mediator of IL-1β induced neutrophilia at early time points during influenza-induced exacerbations. Blocking of IL-17A or IL-1 resulted in a significant abrogation of neutrophil recruitment to the airways in the initial phase of infection or at the peak of viral replication, respectively. Therefore, IL-17A and IL-1β are potential targets for therapeutic treatment of viral exacerbations of chronic lung inflammation.
Abstract:
The continuous production of vascular tissues through secondary growth results in radial thickening of plant organs and is pivotal for various aspects of plant growth and physiology, such as water transport capacity or resistance to mechanical stress. It is driven by the vascular cambium, which produces inward secondary xylem and outward secondary phloem. In the herbaceous plant Arabidopsis thaliana (Arabidopsis), secondary growth occurs in stems, in roots and in the hypocotyl. In the latter, radial growth is most prominent and not obscured by parallel ongoing elongation growth. Moreover, its progression is reminiscent of the secondary growth mode of tree trunks. Thus, the Arabidopsis hypocotyl is a very good model to study basic molecular mechanisms of secondary growth. Genetic approaches have succeeded in the identification of various factors, including peptides, receptors, transcription factors and hormones, which appear to participate in a complex network that controls radial growth. Many of these players are conserved between herbaceous and woody plants. In this review, we will focus on what is known about molecular mechanisms and regulators of vascular secondary growth in the Arabidopsis hypocotyl.
Abstract:
Forensic intelligence is a distinct dimension of forensic science. Forensic intelligence processes have mostly been developed to address either a specific type of trace or a specific problem. Even though these empirical developments have led to successes, they are trace-specific in nature and contribute to the generation of silos which hamper the establishment of a more general and transversal model. Forensic intelligence has shown some important perspectives, but more general developments are required to address persistent challenges. This will ensure the progress of the discipline as well as its widespread implementation in the future. This paper demonstrates that the description of forensic intelligence processes, their architectures, and the methods for building them can, at a certain level, be abstracted from the type of traces considered. A comparative analysis is made between two forensic intelligence approaches developed independently in Australia and in Europe for monitoring apparently very different kinds of problems: illicit drugs and false identity documents. An inductive effort is pursued to identify similarities and to outline a general model. Besides breaking barriers between apparently separate fields of study in forensic science and intelligence, this transversal model would assist in defining forensic intelligence, its role and place in policing, and in identifying its contributions and limitations. The model will facilitate the paradigm shift from the current case-by-case reactive attitude towards a proactive approach by serving as a guideline for the use of forensic case data in an intelligence-led perspective. A follow-up article will specifically address issues related to comparison processes, decision points and organisational issues regarding forensic intelligence (part II).
Abstract:
How a stimulus or a task alters the spontaneous dynamics of the brain remains a fundamental open question in neuroscience. One of the most robust hallmarks of task/stimulus-driven brain dynamics is the decrease of variability with respect to the spontaneous level, an effect seen across multiple experimental conditions and in brain signals observed at different spatiotemporal scales. Recently, it was observed that the trial-to-trial variability and temporal variance of functional magnetic resonance imaging (fMRI) signals decrease in the task-driven activity. Here we examined the dynamics of a large-scale model of the human cortex to provide a mechanistic understanding of these observations. The model allows computing the statistics of synaptic activity in the spontaneous condition and in putative tasks determined by external inputs to a given subset of brain regions. We demonstrated that external inputs decrease the variance, increase the covariances, and decrease the autocovariance of synaptic activity as a consequence of single node and large-scale network dynamics. Altogether, these changes in network statistics imply a reduction of entropy, meaning that the spontaneous synaptic activity outlines a larger multidimensional activity space than does the task-driven activity. We tested this model's prediction on fMRI signals from healthy humans acquired during rest and task conditions and found a significant decrease of entropy in the stimulus-driven activity. Altogether, our study proposes a mechanism for increasing the information capacity of brain networks by enlarging the volume of possible activity configurations at rest and reliably settling into a confined stimulus-driven state to allow better transmission of stimulus-related information.
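The entropy argument can be made concrete for a Gaussian approximation of the synaptic activity: the differential entropy of a multivariate Gaussian depends only on its covariance matrix, so decreased variances together with increased covariances directly shrink the volume of the explored activity space. The sketch below uses small synthetic covariance matrices purely for illustration; the values are not taken from the study.

    import numpy as np

    def gaussian_entropy(cov):
        # Differential entropy (in nats) of a zero-mean multivariate Gaussian:
        # H = 0.5 * log((2*pi*e)^n * det(Sigma))
        n = cov.shape[0]
        sign, logdet = np.linalg.slogdet(cov)
        assert sign > 0, "covariance must be positive definite"
        return 0.5 * (n * np.log(2.0 * np.pi * np.e) + logdet)

    n = 5
    rest_cov = 1.6 * np.eye(n) + 0.4 * np.ones((n, n))  # variance 2.0, pairwise correlation 0.2
    task_cov = 0.5 * np.eye(n) + 0.5 * np.ones((n, n))  # variance 1.0, pairwise correlation 0.5
    print(gaussian_entropy(rest_cov))  # larger entropy: broad spontaneous activity space
    print(gaussian_entropy(task_cov))  # smaller entropy: confined stimulus-driven state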
Abstract:
The development of dysfunctional or exhausted T cells is characteristic of immune responses to chronic viral infections and cancer. Exhausted T cells are defined by reduced effector function, sustained upregulation of multiple inhibitory receptors, an altered transcriptional program and perturbations of normal memory development and homeostasis. This review focuses on (a) illustrating milestone discoveries that led to our present understanding of T cell exhaustion, (b) summarizing recent developments in the field, and (c) identifying new challenges for translational research. Exhausted T cells are now recognized as key therapeutic targets in human infections and cancer. Much of our knowledge of the clinically relevant process of exhaustion derives from studies in the mouse model of Lymphocytic choriomeningitis virus (LCMV) infection. Studies using this model have formed the foundation for our understanding of human T cell memory and exhaustion. We will use this example to discuss recent advances in our understanding of T cell exhaustion and illustrate the value of integrated mouse and human studies and will emphasize the benefits of bi-directional mouse-to-human and human-to-mouse research approaches.
Abstract:
Obesity is associated with chronic food intake disorders and binge eating. Food intake relies on the interaction between homeostatic regulation and hedonic signals, among which olfaction is a major sensory determinant. However, its potential modulation at the peripheral level by the chronic energy imbalance associated with obese status remains a matter of debate. We further investigated olfactory function in a rodent model relevant to the situation encountered in obese humans, where genetic susceptibility is juxtaposed on chronic eating disorders. Using several olfactory-driven tests, we compared the behaviors of obesity-prone (OP) Sprague-Dawley rats fed a high-fat/high-sugar diet with those of obesity-resistant ones fed normal chow. In OP rats, we found 1) a decreased odor threshold, but 2) poor olfactory performance, associated with learning/memory deficits, 3) a decreased influence of fasting, and 4) impaired insulin control of food-seeking behavior. Associated with these behavioral modifications, we found a modulation of metabolism-related factors implicated in 1) electrical olfactory signal regulation (insulin receptor), 2) cellular dynamics (glucocorticoid receptors, pro- and antiapoptotic factors), and 3) homeostasis of the olfactory mucosa and bulb (monocarboxylate and glucose transporters). Such impairments might contribute to the perturbed daily food intake pattern that we observed in obese animals.