922 results for Application specific algorithm


Relevance: 30.00%

Abstract:

ABSTRACT Asthma is a complex inflammatory syndrome caused by environmental factors in predisposed (atopic) individuals. Its severity correlates with the presence of activated T lymphocytes and eosinophils in the bronchoalveolar lavage fluid (BALF). Induction of tolerance via the nasal route results in reduced recruitment of eosinophils into BALF upon challenge, inhibition of TH2 pro-inflammatory cytokine secretion and T-cell hyporesponsiveness. Recently, CD4+CD25+ natural regulatory T cells (Treg) were proposed as key players in controlling the development of asthma and allergic disease. The objective of the present study is to investigate the role of CD4+CD25+ regulatory T cells in the mechanisms leading to tolerance in an established model of asthma. To this end, we depleted CD4+CD25+ T cells at different times during the asthma and tolerance induction protocol in mice and assessed the efficiency of tolerization (intranasal application of a high dose of allergen) in the absence of natural Tregs. First, ovalbumin-sensitized mice were depleted of CD25+ T cells by intraperitoneal injection of anti-CD25 mAb (PC61), either long term (repeated injections of anti-CD25 from day 31 until the end of the protocol) or short term (a single injection of anti-CD25 before or after tolerance induction). We demonstrated that long-term depletion of CD4+CD25+ T cells severely hampered tolerance induction (marked enhancement of eosinophil recruitment into BALF and a vigorous antigen-specific T-cell response to OVA upon allergen challenge), whereas transient depletions were not sufficient to do so. We then characterized T-cell subsets by flow cytometry and observed that a large proportion of CD4+CD25+ T cells express Foxp3, an established marker of regulatory T cells. We also tested the in vitro suppressor activity of CD4+CD25+ T cells from tolerized mice in a coculture proliferation assay and observed strong suppressive activity. Our data suggest that CD4+CD25+ T cells with regulatory properties play a crucial role in the induction of tolerance via the nasal route. The relationship between CD25+ natural Treg and inducible IL-10+ TR1-type Treg remains to be defined.
LAY SUMMARY Asthma is an inflammatory disease of the bronchi, characterized by attacks of dyspnea (breathing difficulty) reflecting a sudden activation of the bronchoconstrictor muscles, together with edema and hypersecretion of the airway mucosa and a substantial production of allergy antibodies (IgE). In most affected children and in nearly half of the adults concerned, the disease is caused by an allergy to substances present in the ambient air (dust mites, pollens or animal dander). Current asthma treatment relies, on the one hand, on relieving symptoms with steroid-based products or bronchodilators. On the other hand, specific immunotherapy (also called desensitization) can improve asthma and "reprogram" the immune system; to date, it is the only known way to make an allergy regress. However, immunotherapy takes a long time (3 to 5 years) and does not work in every case or for every antigen. It is therefore important to better understand the mechanisms involved in such a treatment in order to improve its efficacy. To investigate these mechanisms in detail, mouse models of immunotherapy have been developed. Our study is based on a mouse model of allergic asthma: mice are made allergic to ovalbumin (OVA) and then display the major features of human asthma (recruitment of inflammatory cells into the lungs, increased IgE production and increased airway resistance). Once these asthmatic mice are treated by nasal application of OVA (a form of mucosal immunotherapy), they no longer develop an allergic reaction upon re-exposure to the allergen. Our hypothesis is that this "cure" (tolerance) is linked to the action of so-called "regulatory" cells (CD4 T lymphocytes) characterized by the CD25 marker. To demonstrate this, we eliminated these CD25 "regulatory" cells from our asthmatic mice using a specific monoclonal antibody. We were then no longer able to induce tolerance to the allergen. This suggests a key role for CD4+CD25+ "regulatory" T cells in the success of nasal immunotherapy in our model. Our results do not exclude the participation of other cells such as IL-10-producing lymphocytes (induced regulatory lymphocytes). The respective roles of these regulatory subpopulations will have to be examined in future studies. A better command of these regulatory mechanisms could prove crucial for improving asthma therapies.

Relevance: 30.00%

Abstract:

Large numbers of functionally competent T cells are required to protect from diseases for which antibody-based vaccines have consistently failed (1), which is the case for many chronic viral infections and solid tumors. Therapeutic vaccines therefore aim at the induction of strong antigen-specific T-cell responses. Novel adjuvants have considerably improved the capacity of synthetic vaccines to activate T cells, but more research is necessary to identify optimal compositions of potent vaccine formulations. Consequently, there is a great need for accurate methods for the efficient identification of antigen-specific T cells and the assessment of their functional characteristics directly ex vivo. In this regard, hundreds of clinical vaccination trials have been implemented during the last 15 years, and monitoring techniques have become increasingly standardized.

Relevance: 30.00%

Abstract:

NMDA receptors (NMDARs) mediate ischemic brain damage, for which interactions between the C termini of NR2 subunits and PDZ domain proteins within the NMDAR signaling complex (NSC) are emerging therapeutic targets. However, expression of NMDARs in a non-neuronal context, lacking many NSC components, can still induce cell death. Moreover, it is unclear whether targeting the NSC will impair NMDAR-dependent prosurvival and plasticity signaling. We show that the NMDAR can promote death signaling independently of the NR2 PDZ ligand, when expressed in non-neuronal cells lacking PSD-95 and neuronal nitric oxide synthase (nNOS), key PDZ proteins that mediate neuronal NMDAR excitotoxicity. However, in a non-neuronal context, the NMDAR promotes cell death solely via c-Jun N-terminal protein kinase (JNK), whereas NMDAR-dependent cortical neuronal death is promoted by both JNK and p38. NMDAR-dependent pro-death signaling via p38 relies on neuronal context, although death signaling by JNK, triggered by mitochondrial reactive oxygen species production, does not. NMDAR-dependent p38 activation in neurons is triggered by submembranous Ca(2+), and is disrupted by NOS inhibitors and also a peptide mimicking the NR2B PDZ ligand (TAT-NR2B9c). TAT-NR2B9c reduced excitotoxic neuronal death and p38-mediated ischemic damage, without impairing an NMDAR-dependent plasticity model or prosurvival signaling to CREB or Akt. TAT-NR2B9c did not inhibit JNK activation, and synergized with JNK inhibitors to ameliorate severe excitotoxic neuronal loss in vitro and ischemic cortical damage in vivo. Thus, NMDAR-activated signals comprise pro-death pathways with differing requirements for PDZ protein interactions. These signals are amenable to selective inhibition, while sparing synaptic plasticity and prosurvival signaling.

Relevance: 30.00%

Abstract:

Abstract The solvability of the problem of fair exchange in a synchronous system subject to Byzantine failures is investigated in this work. The fair exchange problem arises when a group of processes are required to exchange digital items in a fair manner, which means that either each process obtains the item it was expecting or no process obtains any information on the inputs of the others. After introducing a novel specification of fair exchange that clearly separates safety and liveness, we give an overview of the difficulty of solving such a problem in the context of a fully connected topology. On the one hand, we show that no solution to fair exchange exists in the absence of an identified process that every process can trust a priori; on the other, a well-known solution to fair exchange relying on a trusted third party is recalled. These two results lead us to complete our system model with a flexible representation of the notion of trust. We then show that fair exchange is solvable if and only if a connectivity condition, named the reachable majority condition, is satisfied. The necessity of the condition is proven by an impossibility result, and its sufficiency by presenting a general solution to fair exchange relying on a set of trusted processes. The focus is then turned towards a specific network topology in order to provide a fully decentralized, yet realistic, solution to fair exchange. The general solution mentioned above is optimized by reducing the computational load borne by trusted processes as far as possible. Accordingly, our fair exchange protocol relies on trusted tamperproof modules that have limited communication abilities and are only required in key steps of the algorithm. This modular solution is then implemented in the context of a pedagogical application developed for illustrating and grasping the complexity of fair exchange. This application, which also includes the implementation of a wide range of Byzantine behaviors, allows executions of the algorithm to be set up and monitored through a graphical display. Surprisingly, some of our results on fair exchange seem to contradict those found in the literature on secure multiparty computation, a problem from the field of modern cryptography, although the two problems have much in common. Both problems are closely related to the notion of a trusted third party, but their approaches and descriptions differ greatly. By introducing a common specification framework, a comparison is proposed in order to clarify their differences and the possible origins of the confusion between them. This leads us to introduce the problem of generalized fair computation, a generalization of fair exchange. Finally, a solution to this new problem is given by generalizing our modular solution to fair exchange.
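To make the trusted-third-party baseline concrete, the sketch below shows an idealized TTP-mediated exchange in Python. The class and method names are illustrative only; the synchrony assumptions, authentication, and abort handling of the actual protocol are abstracted away.

```python
# Minimal sketch of the classic trusted-third-party (TTP) fair exchange,
# assuming authenticated channels and an honest TTP; names are illustrative,
# not the protocol developed in the thesis.

class TrustedThirdParty:
    def __init__(self):
        self.deposits = {}  # process id -> (intended receiver id, item)

    def deposit(self, sender, receiver, item):
        """A process deposits its item, naming the intended receiver."""
        self.deposits[sender] = (receiver, item)

    def settle(self, a, b):
        """Release items only if both matching deposits are present (safety):
        either both processes get the counterpart item, or neither does."""
        if (a in self.deposits and b in self.deposits
                and self.deposits[a][0] == b and self.deposits[b][0] == a):
            return {a: self.deposits[b][1], b: self.deposits[a][1]}
        return None  # abort: nobody learns anything about the other's input


ttp = TrustedThirdParty()
ttp.deposit("P1", "P2", "contract-signed-by-P1")
ttp.deposit("P2", "P1", "payment-token-of-P2")
outcome = ttp.settle("P1", "P2")
print(outcome if outcome else "exchange aborted")
```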

Relevance: 30.00%

Abstract:

Palinspastic reconstructions offer an ideal framework for geological, geographical, oceanographic and climate studies. As historians of the Earth, "reconstructers" try to decipher its past. Ever since learning that continents move, geologists have tried to retrace their evolution through the ages. If Wegener's view of continental motion was revolutionary at the beginning of the 20th century, we have known since the early 1960s that continents do not "drift" aimlessly amid the oceans but belong to a larger ensemble combining continental and oceanic crust: the tectonic plates. Unfortunately, for historical as well as technical reasons, this idea has still not found sufficient echo within the reconstruction community. Nevertheless, we are deeply convinced that, by applying certain methods and principles, one can escape the traditional "Wegenerian" approach and at last move towards true plate tectonics. The main aim of this work is to defend this point of view by presenting, in all necessary detail, our tools and methods. Starting from the paleomagnetic and paleogeographic data classically used for reconstructions, we developed a new methodology that places the tectonic plates and their kinematics at the heart of the problem. Using continental assemblies (also called "key assemblies") as anchor points distributed over the whole span of our study (from the Eocene back to the Cambrian), we develop geodynamic scenarios leading from one to the next, from the past towards the present. Between two stages, the lithospheric plates are progressively reconstructed by adding or removing oceanic material (symbolized by synthetic isochrons) to or from the continents. Except during collisions, plates are moved as single rigid entities. Through the ages, the only evolving elements are the plate boundaries: they are preserved over time, follow a consistent geodynamic evolution, and always form an interconnected network through space. This approach, called "dynamic plate boundaries", integrates multiple factors, among them plate buoyancy, spreading rates at ridges, subsidence curves, stratigraphic and paleobiogeographic data, as well as major tectonic and magmatic events. It thus offers good control over plate kinematics and provides severe constraints on the model. This multi-source approach requires efficient data organization and management. Before this study began, the sheer mass of necessary data had become an almost insurmountable obstacle. GIS (Geographic Information Systems) and geodatabases are software tools specifically devoted to storing, managing and analyzing spatially referenced data and their attributes. By developing the PaleoDyn database in ArcGIS, we converted this mass of scattered data into valuable geodynamic information easily accessible for the creation of reconstructions. At the same time, with specially developed tools, we both facilitated the reconstruction work (task automation) and improved the model by greatly strengthening the kinematic control of plate motions through plate velocity models. Based on 340 newly defined terranes, we developed a set of 35 reconstructions, each associated with its own velocity model. With this unique dataset we can now address major problems of modern geology, such as global sea-level variations and climate change. We began with another major (and not definitively resolved!) problem of modern tectonics: the mechanisms driving plate motions. We observed that, throughout the Earth's history, plate rotation poles (which describe plate motions across the Earth's surface) tend to be distributed along a band running from the northern Pacific through northern South America, the central Atlantic, northern Africa and central Asia up to Japan. Essentially, this distribution means that plates tend to flee this median plane. Barring a methodological bias we have not identified, we interpret this phenomenon as reflecting the secular influence of the Moon on plate motions. The oceanic domain is the keystone of our model, and we took particular care to reconstruct it in detail. In this model, the oceanic crust is preserved from one reconstruction to the next; crustal material is symbolized by synthetic isochrons of known age. We also reconstructed the margins (active or passive), the mid-ocean ridges and the intra-oceanic subduction zones. Using this very detailed oceanic dataset, we developed unique 3-D bathymetric models offering far better precision than previously existing ones.
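The rotation poles mentioned above describe rigid plate motions on the sphere. As a minimal illustration (independent of the PaleoDyn tooling), a point can be rotated about a hypothetical Euler pole with Rodrigues' formula; the pole and angle below are made-up values.

```python
import numpy as np

def latlon_to_xyz(lat, lon):
    """Unit vector on the sphere from latitude/longitude in degrees."""
    la, lo = np.radians([lat, lon])
    return np.array([np.cos(la) * np.cos(lo), np.cos(la) * np.sin(lo), np.sin(la)])

def rotate_about_pole(point, pole, angle_deg):
    """Rodrigues' rotation of unit vector `point` about the Euler pole `pole`."""
    k = pole / np.linalg.norm(pole)
    t = np.radians(angle_deg)
    return (point * np.cos(t) + np.cross(k, point) * np.sin(t)
            + k * np.dot(k, point) * (1 - np.cos(t)))

# Hypothetical example: rotate a site by 20 degrees about a pole at 60N, 30W.
site = latlon_to_xyz(10.0, -40.0)
pole = latlon_to_xyz(60.0, -30.0)
moved = rotate_about_pole(site, pole, 20.0)
lat = np.degrees(np.arcsin(moved[2]))
lon = np.degrees(np.arctan2(moved[1], moved[0]))
print(f"reconstructed position: lat={lat:.2f}, lon={lon:.2f}")
```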

Relevance: 30.00%

Abstract:

The parameter setting of a differential evolution algorithm must meet several requirements: efficiency, effectiveness and reliability. Problems vary, and the solution of a particular problem can be represented in different ways; an algorithm most efficient with one representation may be less efficient with others. The development of differential evolution-based methods contributes substantially to research on evolutionary computing and global optimization in general. The objective of this study is to investigate the differential evolution algorithm, the intelligent adjustment of its control parameters, and its application. In the thesis, the differential evolution algorithm is first examined using different parameter settings and test functions. Fuzzy control is then employed to make the control parameters adaptive, based on the optimization process and expert knowledge. The developed algorithms are applied to training radial basis function networks for function approximation, with possible variables including the centers, widths and weights of the basis functions, and with control parameters either kept fixed or adjusted by the fuzzy controller. After the influence of the control variables on the performance of the differential evolution algorithm was explored, an adaptive version of the algorithm was developed and differential evolution-based radial basis function network training approaches were proposed. Experimental results showed that the performance of the differential evolution algorithm is sensitive to parameter setting, and the best setting was found to be problem dependent. The fuzzy adaptive differential evolution algorithm relieves the user of the burden of parameter setting and performs better than versions using all fixed parameters. Differential evolution-based approaches are effective for training Gaussian radial basis function networks.
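For readers unfamiliar with the algorithm, the following minimal DE/rand/1/bin loop illustrates the control parameters in question: the mutation factor F and crossover rate CR that the thesis makes adaptive through fuzzy control. Here they are simply held fixed, and the test function is an arbitrary choice, not one from the thesis.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """Minimal DE/rand/1/bin. F (mutation scale) and CR (crossover rate)
    are the control parameters whose setting the thesis studies."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    cost = np.apply_along_axis(f, 1, pop)
    for _ in range(generations):
        for i in range(pop_size):
            # mutation: combine three distinct random individuals
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # binomial crossover with at least one mutant component
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            # greedy selection: keep the better of parent and trial
            tc = f(trial)
            if tc <= cost[i]:
                pop[i], cost[i] = trial, tc
    best = np.argmin(cost)
    return pop[best], cost[best]

# Example on the sphere function (illustrative test function only).
x, fx = differential_evolution(lambda v: np.sum(v**2), [(-5, 5)] * 4)
print(x, fx)
```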

Relevance: 30.00%

Abstract:

Background: Optimization methods allow designing changes in a system so that specific goals are attained. These techniques are fundamental for metabolic engineering. However, they are not directly applicable for investigating the evolution of metabolic adaptation to environmental changes. Although biological systems have evolved by natural selection into well-adapted systems, we can hardly expect actual metabolic processes to be at the theoretical optimum that an optimization analysis would yield. More likely, natural systems are to be found in a feasible region compatible with global physiological requirements. Results: We first present a new method for globally optimizing nonlinear models of metabolic pathways based on the Generalized Mass Action (GMA) representation. The optimization task is posed as a nonconvex nonlinear programming (NLP) problem that is solved by an outer-approximation algorithm. This method relies on iteratively solving reduced NLP slave subproblems and mixed-integer linear programming (MILP) master problems, which provide valid upper and lower bounds, respectively, on the global solution of the original NLP. The capabilities of this method are illustrated through its application to the anaerobic fermentation pathway in Saccharomyces cerevisiae. We next introduce a method to identify the feasible parametric regions that allow a system to meet a set of physiological constraints that can be represented mathematically through algebraic equations. This technique applies the outer-approximation algorithm iteratively over a reduced search space in order to identify regions that contain feasible solutions to the problem and discard others in which no feasible solution exists. As an example, we characterize the feasible enzyme activity changes that are compatible with an appropriate adaptive response of the yeast Saccharomyces cerevisiae to heat shock. Conclusion: Our results show the utility of the suggested approach for investigating the evolution of adaptive responses to environmental changes. The proposed method can be used in other important applications such as the evaluation of parameter changes that are compatible with health and disease states.
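For context, the GMA form referred to above and the bounding logic of the outer-approximation scheme can be stated compactly. The notation below is generic, not the paper's exact formulation.

```latex
% Generalized Mass Action (GMA) model: each metabolite balance is a signed
% sum of fluxes, and each flux is a product of power laws.
\dot{X}_i \;=\; \sum_{r} \mu_{ir}\,\gamma_r \prod_{j} X_j^{f_{rj}},
\qquad \mu_{ir} \in \{-1,\,0,\,+1\}
% The outer-approximation scheme alternates a reduced NLP slave subproblem
% (a feasible local solution, hence an upper bound UB for minimization)
% with an MILP master relaxation (a lower bound LB), terminating when
UB - LB \;\le\; \epsilon .
```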

Relevance: 30.00%

Abstract:

Precision Viticulture (PV) is a concept that is beginning to have an impact on the wine-growing sector. Its practical implementation is dependent on various technological developments: crop sensors and yield monitors, local and remote sensors, Global Positioning Systems (GPS), VRA (Variable-Rate Application) equipment and machinery, Geographic Information Systems (GIS) and systems for data analysis and interpretation. This paper reviews a number of research lines related to PV, focused on four very specific fields: 1) quantification and evaluation of within-field variability, 2) delineation of zones of differential treatment at parcel level, based on the analysis and interpretation of this variability, 3) development of Variable-Rate Technologies (VRT) and, finally, 4) evaluation of the opportunities for site-specific vineyard management. Research in these fields should allow winegrowers and enologists to know and understand why yield variability exists within the same parcel, what the causes of this variability are, how the yield and its quality are interrelated and, if spatial variability exists, whether site-specific vineyard management is justifiable on a technical and economic basis.

Relevance: 30.00%

Abstract:

The present dissertation is entitled "Development and Application of Computational Methodologies in Qualitative Modeling". It encompasses the diverse projects undertaken during my time as a PhD student. Rather than a systematic implementation of a framework defined a priori, this thesis should be considered an exploration of methods that can help us infer the blueprint of regulatory and signaling processes. This exploration was driven by concrete biological questions rather than theoretical investigation. Even though the projects involved divergent systems (gene regulatory networks of the cell cycle, signaling networks in lung cells) and organisms (fission yeast, budding yeast, rat, human), our goals were complementary and coherent. The main project of the thesis is the modeling of the Septation Initiation Network (SIN) in S.pombe. Cytokinesis in fission yeast is controlled by the SIN, a protein kinase signaling network that uses the spindle pole body as a scaffold. In order to describe the qualitative behavior of the system and predict unknown mutant behaviors, we adopted a Boolean modeling approach. In this thesis, we report the construction of an extended Boolean model of the SIN, comprising most SIN components and regulators as individual, experimentally testable nodes. The model uses CDK activity levels as control nodes for the simulation of SIN-related events at different stages of the cell cycle. The model was optimized using single knock-out experiments of known phenotypic effect as a training set, and it correctly predicted a test set of double knock-outs. Moreover, the model has made in silico predictions that have been validated in vivo, providing new insights into the regulation and hierarchical organization of the SIN. Another cell cycle related project in this thesis was to create a qualitative, minimal model of cyclin interplay in S.cerevisiae. Clb proteins in budding yeast show a characteristic, sequential activation and decay during the cell cycle, commonly referred to as Clb waves. This event is coordinated with the inverse activation curve of Sic1, which has an inhibitory role in the system. To generate minimal qualitative models that can explain this phenomenon, we selected well-defined experiments and constructed all possible minimal models that, when simulated, reproduce the expected results. The models were filtered using standardized qualitative ODE simulations; only those reproducing the wave-like phenotype were kept. The set of minimal models can be used to suggest regulatory relations among the participating molecules, which can subsequently be tested experimentally. Finally, during my PhD I participated in the SBV Improver Challenge. The goal was to infer species-specific (human and rat) networks using phosphoprotein, gene expression and cytokine data, together with a reference network provided as prior knowledge. Our solution took third place in the challenge; the approach used is explained in detail in the final chapter of the thesis.
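To illustrate the modeling formalism (not the actual SIN wiring, which comprises many more nodes), a synchronous Boolean network with a CDK control node might be simulated as follows; the three-node wiring is invented for illustration.

```python
# Minimal synchronous Boolean network simulation, in the spirit of the SIN
# model described above. The wiring here is illustrative only, not the
# thesis's actual SIN topology.

def step(state, rules):
    """Synchronous update: every node is recomputed from the previous state."""
    return {node: rule(state) for node, rule in rules.items()}

rules = {
    # hypothetical wiring: CDK activity gates a downstream kinase cascade
    "CDK":      lambda s: s["CDK"],                      # control node, held fixed
    "KinaseA":  lambda s: not s["CDK"],                  # activates when CDK drops
    "Effector": lambda s: s["KinaseA"] and not s["CDK"],
}

state = {"CDK": True, "KinaseA": False, "Effector": False}
for t in range(4):
    print(t, state)
    state = step(state, rules)

# A knock-out is simulated by clamping a node to False:
rules["KinaseA"] = lambda s: False
```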

Relevance: 30.00%

Abstract:

Background: The design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one can identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because their mathematical structure enables the development of specific algorithms for global optimization. Results: Based on the GMA canonical representation, we developed in previous work a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization can be performed on the recast GMA model. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models, which extend the power-law formalism to deal with saturation and cooperativity. Conclusions: Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcome some of the numerical difficulties that arise during the global optimization task.
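As a generic worked example of the recasting idea (not taken from the paper), a saturable rate law becomes exactly power-law after introducing an auxiliary variable for its denominator:

```latex
% Saturable rate law:
v \;=\; \frac{V\,X^{n}}{K + X^{n}}
% Introduce the auxiliary variable W = K + X^n; the rate is then GMA:
v \;=\; V\,X^{n}\,W^{-1},
\qquad
\dot{W} \;=\; n\,X^{\,n-1}\,\dot{X},
% so the recast system contains only products of power laws, at the cost
% of one extra variable and its differential equation.
```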

Relevance: 30.00%

Abstract:

Zonal management in vineyards requires the prior delineation of stable yield zones within the parcel. Among the different methodologies used for zone delineation, cluster analysis of yield data from several years is one of the possibilities cited in the scientific literature. However, reasonable doubts remain concerning which cluster algorithm to use and how many zones should be delineated within a field. In this paper, two cluster algorithms (k-means and fuzzy c-means) are compared using grape yield data from three successive years (2002, 2003 and 2004) for a ‘Pinot Noir’ vineyard parcel. The final choice of the most recommendable algorithm was linked to obtaining a stable pattern of spatial yield distribution and to allowing the delineation of compact, average-sized areas. The general recommendation is to use reclassified maps of two clusters or yield classes (a low-yield zone and a high-yield zone); consequently, site-specific vineyard management should be based on the prior delineation of just two different zones or sub-parcels. Both tested algorithms are good options for this purpose. However, the fuzzy c-means algorithm allows for a better zoning of the parcel, forming more compact areas with more balanced zonal differences over time.
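A minimal sketch of the comparison on synthetic multi-year yield data follows. The numbers are made up and the fuzzy c-means implementation is a textbook version, not the software used in the study.

```python
import numpy as np
from sklearn.cluster import KMeans

# rows = grid cells, columns = yields for three seasons (synthetic values,
# not the paper's 'Pinot Noir' dataset)
rng = np.random.default_rng(1)
low  = rng.normal(6.0, 0.8, size=(50, 3))   # kg/vine, low-yield zone
high = rng.normal(10.0, 0.8, size=(50, 3))  # kg/vine, high-yield zone
X = np.vstack([low, high])

# 1) hard k-means into two yield classes
km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# 2) minimal fuzzy c-means (fuzziness m=2), returning membership degrees
def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))        # memberships, rows sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / (d ** (2 / (m - 1))
                   * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return centers, U

centers, U = fuzzy_cmeans(X)
fcm_labels = U.argmax(axis=1)  # defuzzify to two zones for the map
print("k-means zone sizes:", np.bincount(km_labels))
print("fuzzy c-means zone sizes:", np.bincount(fcm_labels))
```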

Relevance: 30.00%

Abstract:

In the future, the telecommunications industry will focus largely on wireless applications and value-added services. To produce these services, companies in the industry cooperate with a broad community of developers. The aim of this work was to improve the case company's existing operating model applied in its cooperation with developers. The study focuses on mobile application developers. The operating model mainly covers the service offering within the developer alliance. In order to make strategic changes to the operating model, it was first important to identify the developers' needs and, second, to monitor and analyse the environment so as to identify the main competitors and their offerings to mobile application developers. The research was carried out through a postal survey of the developers and a qualitative study of the competitors. The nature of the competitive situation and the potential competitors could be identified. The improvement proposals included both general and service-specific improvements.

Relevance: 30.00%

Abstract:

Glioblastoma (GBM) is the most common and most aggressive malignant primary brain tumour. Despite aggressive therapy, the prognosis remains poor, with a median survival of about 15 months. It is important to identify new candidate genes that could have clinical application in this disease. Previous gene expression studies of human GBM samples in our laboratory revealed Ubiquitin Specific Peptidase 15 (USP15) as a gene with low expression, significantly associated with genomic deletions of the chromosomal region encompassing the USP15 locus. USP15 belongs to the ubiquitin-specific protease (USP) family, whose main role is the reversal of ubiquitination and thereby the stabilization of substrates. USP15 has previously been suggested to have a tumour suppressor function via its substrates APC and Caspase 3. We established GBM cell lines that stably express wild-type USP15 or its catalytic mutant. USP15 expression impairs cell growth by inhibiting cell cycle progression; conversely, USP15 depletion in GBM cell lines induces cell cycle progression and proliferation. In order to identify the molecular pathways in which USP15 is implicated, we set out to identify its protein-binding partners in the GBM cell line LN-229 by mass spectrometry. We identified eight new proteins that interact with USP15, involved in important cellular processes such as cytokinesis, the cell cycle, cellular migration and apoptosis. Three of these interactions were confirmed by co-immunoprecipitation in four GBM cell lines (LN-229, LN428, LN18 and LN-Z308). One of the binding proteins is the E3 ligase HECTD1, whose murine homologue promotes the APC-Axin interaction to negatively regulate the Wnt pathway. USP15 can de-ubiquitinate HECTD1 in the LN-229 cell line, while USP15 depletion decreased HECTD1 levels in GBM cell lines, suggesting a stabilizing role for USP15. Moreover, stable expression of HECTD1 in LN-229 inhibits the cell cycle, while its depletion induces cell cycle progression. These results suggest that the USP15-HECTD1 interaction might enhance the antiproliferative effect of HECTD1 in GBM cell lines. Using the TOPflash/FOPflash luciferase system, we showed that HECTD1 and USP15 overexpression can attenuate Wnt pathway activity and decrease Axin2 expression. These data indicate that this new interaction of USP15 with HECTD1 results in negative regulation of the Wnt pathway in GBM cell lines. Further investigation of the regulation of this interaction, or of the protein-binding network of HECTD1 in GBM, may allow the discovery of new therapeutic targets. Finally, PTPIP51 and KIF15 are the other two identified protein partners of USP15. These two proteins are involved in cell proliferation, their depletion in the LN-229 cell line induced cell cycle progression, and USP15 plays a stabilizing role for both. Hence, these results show that USP15 exerts its tumour-suppressive role in GBM cell lines via several distinct molecular mechanisms, indicating its multidimensional function.

Relevance: 30.00%

Abstract:

The application of contrast media in post-mortem radiology differs from clinical approaches in living patients. Post-mortem changes in the vascular system and the absence of blood flow lead to specific problems that have to be considered when performing post-mortem angiography. In addition, interpreting the images is challenging due to technique-related and post-mortem artefacts that are specific to each applied technique and must be known to the examiner. Although the idea of injecting contrast media is old, classic methods are not simply transferable to modern radiological techniques in forensic medicine, as they are mostly dedicated to single-organ studies or applicable only shortly after death. With the introduction of modern imaging techniques, such as post-mortem computed tomography (PMCT) and post-mortem magnetic resonance (PMMR), to forensic death investigations, intensive research started to explore their advantages and limitations compared to conventional autopsy. PMCT has already become a routine investigation in several centres, and different techniques have been developed to better visualise the vascular system and organ parenchyma in PMCT. In contrast, the use of PMMR is still limited due to practical issues, and research is only now starting in the field of PMMR angiography. This article gives an overview of the problems of post-mortem contrast media application, the various classic and modern techniques, and the issues to consider when using different media.

Relevance: 30.00%

Abstract:

Recent advances in machine learning methods increasingly enable the automatic construction of various types of computer-assisted tools that have been difficult or laborious to program by human experts. The tasks for which such tools are needed arise in many areas, here especially in the fields of bioinformatics and natural language processing. Machine learning methods may not work satisfactorily if they are not appropriately tailored to the task in question; however, their learning performance can often be improved by taking advantage of deeper insight into the application domain or the learning problem at hand. This thesis considers the development of kernel-based learning algorithms that incorporate this kind of prior knowledge of the task in question in an advantageous way. Moreover, computationally efficient algorithms for training the learning machines for specific tasks are presented. In the context of kernel-based learning methods, prior knowledge is often incorporated by designing appropriate kernel functions. Another well-known way is to develop cost functions that fit the task under consideration. For disambiguation tasks in natural language, we develop kernel functions that take account of the positional information and the mutual similarities of words; the use of this information is shown to significantly improve the disambiguation performance of the learning machine. Further, we design a new cost function that is better suited to the task of information retrieval and to more general ranking problems than the cost functions designed for regression and classification. We also consider other applications of kernel-based learning algorithms, such as text categorization and pattern recognition in differential display. We develop computationally efficient algorithms for training the considered learning machines with the proposed kernel functions. We also design a fast cross-validation algorithm for regularized least-squares type learning algorithms. Further, an efficient version of the regularized least-squares algorithm that can be used together with the new cost function for preference learning and ranking tasks is proposed. In summary, we demonstrate that the incorporation of prior knowledge is possible and beneficial, and that novel advanced kernels and cost functions can be used in algorithms efficiently.
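The fast cross-validation mentioned above can be illustrated with the standard closed-form leave-one-out identity for regularized least-squares: after a single matrix inversion, all n leave-one-out residuals follow without retraining. The sketch below is a generic instance of this well-known identity, not the thesis's exact algorithm, and the data and kernel width are synthetic.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=60)

lam = 0.1
K = rbf_kernel(X, X)
G_inv = np.linalg.inv(K + lam * np.eye(len(X)))
alpha = G_inv @ y        # dual coefficients of the RLS fit
H = K @ G_inv            # "hat" matrix mapping y to fitted values

# Leave-one-out residual identity: e_i = (y_i - yhat_i) / (1 - H_ii)
yhat = H @ y
loo_residuals = (y - yhat) / (1.0 - np.diag(H))
print("LOO mean squared error:", np.mean(loo_residuals ** 2))
```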