960 results for multiple-try Metropolis algorithm
Abstract:
Multiple sclerosis (MS) is a demyelinating disease of the central nervous system (CNS) causing motor, sensory, and cognitive deficits. MS arises in young adults with genetic predispositions but appears to be triggered by environmental factors. MS predominantly affects women, and its prevalence in high-risk areas such as Switzerland is 0.1%. Although its exact aetiology remains unknown, we know that the disease is mediated by peripheral autoreactive T lymphocytes, which infiltrate the CNS, where they activate other immune cells as well as the CNS cells themselves, creating an inflammatory focus that attacks and eventually kills oligodendrocytes and neurons. Inflammatory episodes are interspersed with remission phases associated with partial healing of the lesions. This first phase of the disease, comprising inflammatory episodes and remissions, is called relapsing-remitting MS (RR-MS) and affects 90% of patients. In two-thirds of cases it evolves into secondary progressive MS (SP-MS), which is characterized by a steady progression of the disease, associated with reduced inflammation but increased neurodegeneration. Patients with primary progressive MS (PP-MS) develop the symptoms of the progressive phase of the disease directly. Available therapies have considerably improved the disease course of RR-MS patients by dampening the immune response and hence inflammation. However, these treatments are ineffective in SP-MS and PP-MS patients, as they do not act on neurodegeneration. IL-22, a cytokine notably secreted by Th17 cells, has been associated with MS through its contribution to the permeabilization of the blood-brain barrier and to CNS inflammation, which are key steps in the pathogenesis of the disease. Furthermore, the gene coding for a potent IL-22 inhibitor, the 'IL-22 binding protein' (IL-22BP), has been shown to be a risk factor for MS. These clues led us to look more closely at the role of IL-22 in MS. We were able to show that IL-22 and IL-22BP are increased in the blood of MS patients compared with healthy subjects. We found that IL-22 specifically targets astrocytes in the CNS and that its receptor is particularly expressed in the lesions of MS patients. Unexpectedly, we were able to show that IL-22 appears to support astrocyte survival. This finding, suggesting that IL-22 may be protective for the CNS in MS, confirms recent publications and opens the way to potential therapeutic applications. In parallel, with the aim of better understanding the immunopathogenesis of MS, we developed induced pluripotent stem cell (iPSC) culture techniques. Our iPSCs are derived from donor blood and acquire all the properties of embryonic stem cells after induction. iPSCs can then be differentiated into various cell types, including CNS cells. We thus successfully obtained neurons derived from blood cells via the iPSC stage. The next step is to generate astrocyte and oligodendrocyte cultures, thereby obtaining the main CNS cell types, the goal being to build true 'brains-in-a-dish'.
This tool seems particularly well suited to studying the activity of various molecules on CNS cells, such as IL-22 and other molecules of potential therapeutic interest for the CNS. The ultimate goal is to develop co-cultures of CNS cells with autologous immune cells from MS patients and healthy subjects, in order to expose the attack of CNS cells by autoreactive leukocytes. This prospective project has increased our knowledge of the immune aspects of MS and aims to better understand the immunopathogenesis of MS in order to devise new therapeutic strategies. -- Multiple sclerosis is an auto-inflammatory disease of the central nervous system leading to the destruction of myelin, which is essential for nerve conduction, and ultimately to the death of the neurons themselves. This results in motor, sensory, and cognitive deficits, which tend to worsen over the course of the disease. It arises in young adults, between the ages of 20 and 40, and predominates in women. In Switzerland, about one person in 1,000 is affected by multiple sclerosis. The exact causes of this disease, which include genetic and environmental factors, are still poorly understood. Increasingly effective treatments have been developed in recent years and have dramatically improved the disease course of multiple sclerosis patients. However, these treatments are effective only in certain categories of patients and can cause severe side effects. These therapies act almost exclusively on the cells of the immune system by partially deactivating them, but not on nerve cells, even though these are what determine the patient's outcome. The development of drugs that protect central nervous system cells or enable their regeneration is therefore essential. Our study of interleukin-22 allowed us to show that this cytokine (a 'hormone' of the immune system) can specifically target astrocytes, glial cells that play a central role in maintaining the equilibrium of the central nervous system. Our research showed that interleukin-22 may improve astrocyte survival during the acute phase of the disease and may also have neuroprotective properties. In parallel, we are developing a new in vitro model for studying multiple sclerosis using induced pluripotent stem cell technology. These stem cells are induced from the donor's blood cells and acquire all the characteristics of the embryonic stem cells present in a developing organism. These pluripotent stem cells thus have, for example, the capacity to differentiate into central nervous system cells. In this way, we were able to obtain neurons. The ultimate goal would be to reconstitute a rudimentary brain in vitro, by culturing different types of central nervous system cells together, in order to carry out experiments there with immune cells from the same donor. This work aims to improve our understanding of the pathogenesis of multiple sclerosis and to enable the development of new therapeutic strategies.
-- Multiple sclerosis (MS) is a demyelinating disease of the central nervous system leading to cognitive, sensory, and motor disabilities. MS occurs in genetically predisposed young adults with probable environmental triggers. MS affects predominantly women, and its prevalence in high-risk areas such as Switzerland is 0.1%. Though its exact aetiology remains undetermined, we know that autoreactive T cells from the periphery are reactivated and recruited into the central nervous system (CNS), where they further activate other immune cells and resident cells, creating inflammatory foci where oligodendrocytes and neurons are insulted and, eventually, killed. Inflammatory episodes, called relapses, are interspersed with remission phases during which partial recovery of the lesions occurs. This first phase of the disease, occurring in 90% of patients, is called relapsing-remitting MS (RR-MS) and leads, in two-thirds of cases, to secondary-progressive MS (SP-MS), in which there is a continuous, steady progression of the disease, associated with reduced inflammation but increased neurodegeneration. Primary-progressive MS (PP-MS) patients experience this progressive phase of the disease directly. Whereas disease-modifying therapies have dramatically ameliorated the disease course of RR-MS patients by dampening immunity and, in turn, inflammation, treatments for SP-MS and PP-MS patients, who suffer primarily from the neurodegenerative aspect of the disease, are still nonexistent. IL-22, a pro-inflammatory Th17 cell cytokine, has been associated with MS by participating in blood-brain barrier infiltration and CNS inflammation, which are crucial steps in MS pathogenesis. In addition, the gene coding for IL-22 binding protein (IL-22BP), a potent secreted IL-22 inhibitor, has been associated with MS risk. These findings call for further investigation of the role of IL-22 in MS. We detected increased IL-22 and IL-22BP in the blood of MS patients as compared to healthy controls. Since IL-22 acts exclusively on cells of nonhematopoietic origin, we found that it specifically targets astrocytes in the CNS and that its receptor is highly expressed in the lesions of MS patients. Unexpectedly, we found that IL-22 seems to promote the survival of astrocytes. This finding, suggesting that IL-22 might be protective for the CNS in the context of MS, is consistent with recent publications and might open putative therapeutic applications at the CNS level. In parallel, with the aim of better understanding the immunopathogenesis of MS, we developed induced pluripotent stem cell (iPSC) techniques. iPSCs are derived from donor blood cells and bear embryonic stem cell properties. iPSCs can be differentiated into various cell types, including CNS cells. We successfully obtained neurons derived from donor blood cells through iPSCs. We further aim at developing astrocyte and oligodendrocyte cultures to recreate a 'brain-in-a-dish'. This would be a powerful tool to test the activity of various compounds on CNS cells, including IL-22 and other putative neuroprotective drugs. Ultimately, the goal is to develop co-cultures of CNS cells with autologous immune cells from MS patients as well as healthy controls, to expose evidence of CNS cells being targeted by autoreactive leukocytes. This prospective project has increased our knowledge of the immune aspects of MS and further aims at better understanding the immunopathology of MS in order to pave the way for the elaboration of new therapeutic strategies.
Abstract:
Background: Virtual reality (VR) simulation is increasingly used in surgical disciplines. Since VR simulators measure multiple outcomes, standardized reporting is needed. Methods: We present an algorithm for combining multiple VR outcomes into dimension summary measures, which are then integrated into a meaningful total score. We reanalyzed the data of two VR studies by applying the algorithm. Results: The proposed algorithm was successfully applied to both VR studies. Conclusions: The algorithm contributes to standardized and transparent reporting in VR-related research.
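The abstract does not spell out the aggregation rule; a minimal sketch of one plausible reading — z-standardize each outcome across trainees, average outcomes within a dimension, then average the dimension summaries into a total score — could look like this (the dictionary layout and the equal weighting are assumptions, not the paper's method):

```python
import numpy as np

def total_score(outcomes_by_dimension):
    """Combine raw VR simulator outcomes into one total score per trainee.

    outcomes_by_dimension: dict mapping a dimension name (e.g. 'time',
    'accuracy') to an array of shape (n_trainees, n_outcomes).
    Illustrative rule: z-standardize each outcome across trainees, average
    within each dimension, then average the dimension summaries.
    """
    dimension_scores = []
    for outcomes in outcomes_by_dimension.values():
        std = outcomes.std(axis=0)
        std[std == 0] = 1.0                      # guard against constant outcomes
        z = (outcomes - outcomes.mean(axis=0)) / std
        dimension_scores.append(z.mean(axis=1))  # one summary per trainee
    return np.mean(dimension_scores, axis=0)     # equal-weight total score
```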
Abstract:
The loss of brain volume has been used as a marker of tissue destruction and can serve as an index of the progression of neurodegenerative diseases such as multiple sclerosis. In the present study, we tested a new method for tissue segmentation based on pixel intensity thresholding, using generalized Tsallis entropy to determine a statistical segmentation parameter for each class of brain tissue. We compared the performance of this method over a range of different q parameters and found a distinct optimal q parameter for white matter, gray matter, and cerebrospinal fluid. Our results support the conclusion that the differences in structural correlations and scale-invariant similarities present in each tissue class can be accessed by generalized Tsallis entropy, yielding the intensity limits that separate these tissue classes. To test the method, we applied it to brain magnetic resonance images of 43 patients and 10 healthy controls matched for gender and age. The values found for the entropic index q were 0.2 for cerebrospinal fluid, 0.1 for white matter, and 1.5 for gray matter. With this algorithm, we detected an annual brain volume loss of 0.98% in the patients, in agreement with literature data. We conclude that Tsallis entropy adds advantages to the process of automatic segmentation of tissue classes, which had not been demonstrated previously.
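The thresholding rule itself is not reproduced in the abstract; a common formulation of Tsallis-entropy thresholding (not necessarily the exact variant used in the study) picks the intensity that maximizes the pseudo-additive combination of the two class entropies:

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum(p^q)) / (q - 1), for q != 1."""
    p = p[p > 0]
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def tsallis_threshold(image, q, n_bins=256):
    """Pick the threshold t maximizing S_q(A) + S_q(B) + (1-q) S_q(A) S_q(B),
    where A and B are the below/above-threshold classes (8-bit intensities)."""
    hist, _ = np.histogram(image, bins=n_bins, range=(0, n_bins))
    p = hist / hist.sum()
    best_t, best_val = 0, -np.inf
    for t in range(1, n_bins - 1):
        pa, pb = p[:t].sum(), p[t:].sum()
        if pa == 0 or pb == 0:
            continue
        sa = tsallis_entropy(p[:t] / pa, q)   # normalized class distributions
        sb = tsallis_entropy(p[t:] / pb, q)
        val = sa + sb + (1.0 - q) * sa * sb   # pseudo-additivity of Tsallis entropy
        if val > best_val:
            best_t, best_val = t, val
    return best_t
```

Running this once per tissue class with the reported q values (0.2, 0.1, 1.5) would yield one class-specific intensity limit each.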
Abstract:
Ordered gene problems are a very common class of optimization problems, and countless algorithms have been developed in an attempt to find high-quality solutions to them. Because many popular heuristics and metaheuristics exist for this class, it is also common to see other types of problems reduced to ordered-gene form. Multiple ordered gene problems are studied, namely the travelling salesman problem, the bin packing problem, and the graph colouring problem. In addition, two bioinformatics problems not traditionally seen as ordered gene problems are studied: DNA error correction and DNA fragment assembly. These problems are studied with multiple variations and combinations of heuristics and metaheuristics, using two distinct types of representations. The majority of the algorithms are built around the Recentering-Restarting Genetic Algorithm. The algorithm variations were successful on all problems studied, and particularly on the two bioinformatics problems. For DNA error correction, multiple cases were found in which 100% of the codes were corrected. The algorithm variations were also able to beat all other state-of-the-art DNA fragment assemblers on 13 out of 16 benchmark problem instances.
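The Recentering-Restarting specifics are not described in this abstract; the sketch below shows only the generic ordered-gene machinery such algorithms build on — a permutation encoding with order crossover and swap mutation, applied here to TSP tour length (all parameters and the `ga_tsp` helper are illustrative):

```python
import random

def order_crossover(p1, p2):
    """OX: keep a random slice of p1, fill the remaining genes in p2's order."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = iter(g for g in p2 if g not in set(p1[a:b]))
    return [g if g is not None else next(fill) for g in child]

def ga_tsp(dist, pop_size=100, gens=500, mut=0.2, elite=10):
    """Generic permutation GA for TSP; dist is an n x n distance matrix."""
    n = len(dist)
    tour_len = lambda t: sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=tour_len)
        nxt = pop[:elite]                        # elitism
        while len(nxt) < pop_size:
            p1, p2 = random.sample(pop[:pop_size // 2], 2)
            c = order_crossover(p1, p2)
            if random.random() < mut:            # swap mutation
                i, j = random.sample(range(n), 2)
                c[i], c[j] = c[j], c[i]
            nxt.append(c)
        pop = nxt
    return min(pop, key=tour_len)
```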
Abstract:
In a recent paper, Bai and Perron (1998) considered theoretical issues related to the limiting distribution of estimators and test statistics in the linear model with multiple structural changes. In this companion paper, we consider practical issues arising in empirical applications of the procedures. We first address the problem of estimating the break dates and present an efficient algorithm to obtain global minimizers of the sum of squared residuals. This algorithm is based on the principle of dynamic programming and requires at most O(T^2) least-squares operations for any number of breaks. Our method can be applied to both pure and partial structural-change models. Second, we consider the problem of forming confidence intervals for the break dates under various hypotheses about the structure of the data and the errors across segments. Third, we address the issue of testing for structural changes under very general conditions on the data and the errors. Fourth, we address the issue of estimating the number of breaks. We present simulation results pertaining to the behavior of the estimators and tests in finite samples. Finally, a few empirical applications are presented to illustrate the usefulness of the procedures. All methods discussed are implemented in a GAUSS program available upon request for non-profit academic use.
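As a sketch of the dynamic-programming principle — shown here for the simplest pure structural-change model, a shift in mean, rather than the paper's full regression setting — one can precompute the SSR of every candidate segment in O(T^2) updates and then recurse on the number of breaks:

```python
import numpy as np

def segment_ssr(y):
    """ssr[i, j] = sum of squared residuals of y[i..j] around its own mean."""
    T = len(y)
    ssr = np.full((T, T), np.inf)
    for i in range(T):
        s = ss = 0.0
        for j in range(i, T):
            s += y[j]; ss += y[j] ** 2
            ssr[i, j] = ss - s * s / (j - i + 1)   # recursive update, O(T^2) total
    return ssr

def break_dates(y, m):
    """Global minimizers of total SSR with m breaks, via dynamic programming."""
    T = len(y)
    ssr = segment_ssr(y)
    cost = np.full((m + 1, T), np.inf)   # cost[k, t]: best SSR for y[0..t], k breaks
    arg = np.zeros((m + 1, T), dtype=int)
    cost[0] = ssr[0]
    for k in range(1, m + 1):
        for t in range(k, T):
            # last break at j, last segment y[j+1..t]
            cand = cost[k - 1, k - 1:t] + ssr[k:t + 1, t]
            j = int(np.argmin(cand))
            cost[k, t], arg[k, t] = cand[j], j + k - 1
    breaks, t = [], T - 1                # backtrack the m break dates
    for k in range(m, 0, -1):
        t = arg[k, t]
        breaks.append(t)
    return sorted(breaks), cost[m, T - 1]
```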
Abstract:
Thesis digitized by the Division de la gestion de documents et des archives de l'Université de Montréal.
Abstract:
Financial securities are often modelled by stochastic differential equations (SDEs). These equations can describe the behaviour of the asset, and sometimes also of certain model parameters. For example, the Heston (1993) model, which belongs to the class of stochastic volatility models, describes the behaviour of the asset and of its variance. The Heston model is very attractive since it admits semi-analytical formulas for certain derivatives, as well as a degree of realism. However, most simulation algorithms for this model run into problems when the Feller (1951) condition is not satisfied. In this thesis, we introduce three new simulation algorithms for the Heston model. These new algorithms aim to speed up the well-known algorithm of Broadie and Kaya (2006); to do so, we use, among other things, Markov chain Monte Carlo (MCMC) methods and approximations. In the first algorithm, we modify the second step of the Broadie-Kaya method in order to speed it up. Instead of using the second-order Newton method and the inversion approach, we use the Metropolis-Hastings algorithm (see Hastings (1970)). The second algorithm is an improvement of the first. Instead of using the true density of the integrated variance, we use the approximation of Smith (2007). This improvement reduces the dimension of the characteristic equation and speeds up the algorithm. Our last algorithm is not based on an MCMC method. However, we still try to speed up the second step of the Broadie and Kaya (2006) method. To achieve this, we use a gamma random variable whose moments are matched to those of the true time-integrated variance random variable. According to Stewart et al. (2007), it is possible to approximate a convolution of gamma random variables (which closely resembles the representation given by Glasserman and Kim (2008) if the time step is small) by a single gamma random variable.
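To make the MCMC substitution concrete, here is a generic random-walk Metropolis-Hastings sampler of the kind that can replace Newton-based inversion when only an unnormalized density is available; the gamma target below is a stand-in for illustration, not the actual integrated-variance density:

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_samples, step=0.1, rng=None):
    """Random-walk Metropolis-Hastings sampler.

    log_target: log of the (unnormalized) target density. The symmetric
    Gaussian proposal makes the Hastings ratio reduce to target(x')/target(x).
    """
    rng = rng or np.random.default_rng()
    x, lp = x0, log_target(x0)
    samples = np.empty(n_samples)
    for i in range(n_samples):
        x_new = x + step * rng.standard_normal()
        lp_new = log_target(x_new)
        if np.log(rng.uniform()) < lp_new - lp:   # accept/reject step
            x, lp = x_new, lp_new
        samples[i] = x                            # keep current state either way
    return samples

# Illustrative use: sample a Gamma(3, 1) stand-in target, density ~ x^2 e^{-x}.
logp = lambda x: 2.0 * np.log(x) - x if x > 0 else -np.inf
draws = metropolis_hastings(logp, x0=2.0, n_samples=10_000)
```

For the third algorithm, matching a gamma variable to a target mean m and variance v amounts to choosing shape k = m^2/v and scale theta = v/m.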
Abstract:
This thesis is entitled "Modelling and analysis of recurrent event data with multiple causes". Survival data is a term used to describe data that measure the time to occurrence of an event. In survival studies, the time to occurrence of an event is generally referred to as a lifetime. Recurrent event data are commonly encountered in longitudinal studies in which individuals are followed to observe repeated occurrences of certain events. In many practical situations, individuals under study are exposed to failure due to more than one cause, and the eventual failure can be attributed to exactly one of these causes. The proposed model was useful in real-life situations for studying the effect of covariates on recurrences of certain events due to different causes. In Chapter 3, an additive hazards model for gap time distributions of recurrent event data with multiple causes was introduced, and parameter estimation and asymptotic properties were discussed. In Chapter 4, a shared frailty model for the analysis of bivariate competing risks data was presented, and estimation procedures for the shared gamma frailty model, with and without covariates, using the EM algorithm were discussed. In Chapter 6, two nonparametric estimators for the bivariate survivor function of paired recurrent event data were developed. The asymptotic properties of the estimators were studied. The proposed estimators were applied to a real-life data set. Simulation studies were carried out to assess the efficiency of the proposed estimators.
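As a hedged illustration of the data structure involved — not of the thesis's estimators — one can simulate gap times of recurrent events with two competing causes and a shared gamma frailty (all rates below are arbitrary):

```python
import numpy as np

def simulate_recurrent(n_subjects, base_rates=(0.5, 0.3), frailty_var=0.4,
                       censor_time=10.0, rng=None):
    """Gap times with two causes; a mean-1 gamma frailty multiplies both
    cause-specific hazards, inducing within-subject dependence.
    Returns one record per event: (subject, gap_time, cause)."""
    rng = rng or np.random.default_rng()
    shape = 1.0 / frailty_var                 # shape*scale = 1, variance = frailty_var
    p_cause = np.array(base_rates) / sum(base_rates)
    records = []
    for i in range(n_subjects):
        z = rng.gamma(shape, frailty_var)     # subject-specific frailty
        total_rate, t = z * sum(base_rates), 0.0
        while True:
            gap = rng.exponential(1.0 / total_rate)
            if t + gap > censor_time:         # administrative censoring
                break
            t += gap
            cause = rng.choice(len(base_rates), p=p_cause)  # competing cause
            records.append((i, gap, cause))
    return records
```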
Abstract:
The assembly job shop scheduling problem (AJSP) is one of the most complicated combinatorial optimization problems, involving the simultaneous scheduling of the processing and assembly operations of complex structured products. The problem becomes even more complicated when a combination of two or more optimization criteria is considered. This thesis addresses an assembly job shop scheduling problem with multiple objectives: simultaneously minimizing makespan and total tardiness. Two approaches, viz. a weighted approach and a Pareto approach, are used for solving the problem. However, it is quite difficult to achieve an optimal solution with traditional optimization approaches owing to the high computational complexity. Two metaheuristic techniques, namely genetic algorithms and tabu search, are therefore investigated in this thesis for solving multi-objective assembly job shop scheduling problems (MOAJSP). Three algorithms based on the two metaheuristic techniques, covering the weighted approach and the Pareto approach, are proposed. A new pairing mechanism is developed for the crossover operation in the genetic algorithm, leading to improved solutions and faster convergence. The performance of the proposed algorithms is evaluated through a set of test problems and the results are reported. The results reveal that the proposed algorithms based on the weighted approach are feasible and effective for solving MOAJSP instances according to the weight assigned to each objective criterion, and that the proposed algorithms based on the Pareto approach are capable of producing a number of good Pareto-optimal scheduling plans for MOAJSP instances.
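For the Pareto approach, the core operation is the non-dominated filter over (makespan, total tardiness) pairs; a minimal sketch follows, with schedule evaluation itself (which is problem-specific) omitted:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only non-dominated (makespan, total_tardiness) pairs."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Illustrative weighted-approach fitness for comparison (w is the weight
# assigned to makespan; both objectives are minimized):
weighted = lambda makespan, tardiness, w=0.5: w * makespan + (1 - w) * tardiness
```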
Abstract:
Distributed systems are one of the most vital components of the economy. The most prominent example is probably the internet, a constituent element of our knowledge society. During recent years, the number of novel network types has steadily increased. Amongst others, sensor networks, distributed systems composed of tiny computational devices with scarce resources, have emerged. The further development and heterogeneous connection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches for the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed with respect to its practical utility. Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods which copy principles from natural evolution. They use a population of solution candidates which they try to refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the wanted global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process where the solution candidates are distributed programs. The objective functions rate how closely these programs approximate the goal behavior in multiple randomized network simulations. The evolutionary process step by step selects the most promising solution candidates and modifies and combines them with mutation and crossover operators. This way, a description of the global behavior of a distributed system is translated automatically to programs which, if executed locally on the nodes of the system, exhibit this behavior. In our work, we test six different ways of representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations designed by us, called Rule-based Genetic Programming (RBGP and eRBGP). We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs the evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features which make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches have been developed especially in order to mitigate these difficulties. In our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and was, in most cases, superior to the other representations.
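The evaluation step described above — rating a candidate program by how closely repeated randomized network simulations match the specified global behavior — can be sketched generically; `simulate` and `distance_to_spec` are placeholders for the problem-specific pieces, not names from the thesis:

```python
import random
import statistics

def fitness(program, simulate, distance_to_spec, n_runs=10):
    """Average deviation of the observed global behavior from the
    specification over several randomized network simulations
    (lower is better, 0 means the spec is matched exactly)."""
    scores = []
    for _ in range(n_runs):
        seed = random.randrange(2**32)          # randomize topology/timing
        behavior = simulate(program, seed)      # run the program on every node
        scores.append(distance_to_spec(behavior))
    return statistics.mean(scores)
```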
Abstract:
This paper deals with the design of optimal multiple gravity assist trajectories with deep space manoeuvres. A pruning method which considers the sequential nature of the problem is presented. The method locates feasible vectors using local optimization and applies a clustering algorithm to find reduced bounding boxes which can be used in a subsequent optimization step. Since multiple local minima remain within the pruned search space, the use of a global optimization method, such as Differential Evolution, is suggested for finding solutions which are likely to be close to the global optimum. Two case studies are presented.
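A schematic of the clustering step — grouping feasible solution vectors and taking a padded, axis-aligned bounding box per cluster as the reduced search region — under assumed details (k-means clustering, a fixed relative padding):

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def reduced_boxes(feasible_points, k=5, pad=0.05):
    """Cluster feasible vectors (rows of an (n, d) array) and return one
    padded bounding box (lo, hi) per non-empty cluster, for use as the
    search bounds of a subsequent optimization step."""
    _, labels = kmeans2(feasible_points, k, minit='points')
    boxes = []
    for c in range(k):
        pts = feasible_points[labels == c]
        if len(pts) == 0:
            continue
        lo, hi = pts.min(axis=0), pts.max(axis=0)
        margin = pad * (hi - lo)                 # small safety margin
        boxes.append((lo - margin, hi + margin))
    return boxes
```

Each returned box could then bound one run of a global optimizer such as Differential Evolution.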
Abstract:
Most haptic environments are based on single point interactions whereas in practice, object manipulation requires multiple contact points between the object, fingers, thumb and palm. The Friction Cone Algorithm was developed specifically to work well in a multi-finger haptic environment where object manipulation would occur. However, the Friction Cone Algorithm has two shortcomings when applied to polygon meshes: there is no means of transitioning polygon boundaries or feeling non-convex edges. In order to overcome these deficiencies, Face Directed Connection Graphs have been developed as well as a robust method for applying friction to non-convex edges. Both these extensions are described herein, as well as the implementation issues associated with them.
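The defining test behind any friction-cone method (the basic cone check, not the paper's polygon-mesh extensions) is whether the contact force stays inside the cone, i.e. whether its tangential component exceeds the friction coefficient times its normal component:

```python
import numpy as np

def inside_friction_cone(force, normal, mu):
    """True if `force` lies inside the friction cone of coefficient `mu`
    around the contact `normal` (i.e., the contact does not slip)."""
    normal = normal / np.linalg.norm(normal)
    f_n = np.dot(force, normal)                  # normal component (must push into surface)
    f_t = np.linalg.norm(force - f_n * normal)   # tangential magnitude
    return f_n > 0 and f_t <= mu * f_n
```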
Abstract:
In this work a method for building multiple-model structures is presented. A clustering algorithm that uses data from the system is employed to define the architecture of the multiple-model, including the size of the region covered by each model and the number of models. A heating, ventilation, and air conditioning system is used as a testbed for the proposed method.
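A minimal sketch of the idea, under assumed details (k-means to carve operating regions from regression data, one least-squares affine model per region, nearest-centroid routing at prediction time):

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def build_multiple_model(X, y, n_models=4):
    """Partition the operating space by clustering the regressors X (n x d),
    then fit one affine model per region. Returns centroids and coefficients;
    the number of models and region sizes follow from the clustering."""
    centroids, labels = kmeans2(X, n_models, minit='points')
    models = []
    for c in range(n_models):
        Xc, yc = X[labels == c], y[labels == c]
        A = np.hstack([Xc, np.ones((len(Xc), 1))])       # affine local model
        coef, *_ = np.linalg.lstsq(A, yc, rcond=None)
        models.append(coef)
    return centroids, models

def predict(x, centroids, models):
    """Route a query point to the nearest region's local model."""
    c = int(np.argmin(np.linalg.norm(centroids - x, axis=1)))
    return np.append(x, 1.0) @ models[c]
```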
Abstract:
We introduce and describe the Multiple Gravity Assist problem, a global optimisation problem that is of great interest in the design of spacecraft and their trajectories. We discuss its formalization and we show, for one particular problem instance, the performance of selected state-of-the-art heuristic global optimisation algorithms. A deterministic search space pruning algorithm is then developed and its polynomial time and space complexity derived. The algorithm is shown to achieve search space reductions of more than six orders of magnitude, thus significantly reducing the complexity of the subsequent optimisation.
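The pruning criteria are not given in the abstract; schematically, a deterministic per-leg pruning over a gridded search space keeps only cells that pass a cheap feasibility test and remain reachable from a surviving cell of the previous leg, which is polynomial in the grid size per leg (`feasible` and `compatible` are placeholders for the actual constraints):

```python
def prune(grid_per_leg, feasible, compatible):
    """grid_per_leg: list of candidate-cell lists, one per trajectory leg.
    feasible(cell): cheap per-leg test (e.g., a pericentre or Delta-V bound).
    compatible(prev_cell, cell): links cells of consecutive legs.
    Keeps, leg by leg, only cells reachable through surviving predecessors."""
    survivors = [c for c in grid_per_leg[0] if feasible(c)]
    pruned = [survivors]
    for cells in grid_per_leg[1:]:
        survivors = [c for c in cells
                     if feasible(c) and any(compatible(p, c) for p in survivors)]
        pruned.append(survivors)
    return pruned
```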