817 results for rejection algorithm


Relevance: 20.00%

Abstract:

An effective preservation method and decreased rejection are essential for tracheal transplantation in the reconstruction of large airway defects. Our objective in the present study was to evaluate the antigenic properties of glycerin-preserved tracheal segments. Sixty-one tracheal segments (2.4 to 3.1 cm) were divided into three groups: autograft (N = 21), fresh allograft (N = 18) and glycerin-preserved allograft (N = 22). Two segments from different groups were implanted into the greater omentum of dogs (N = 31). After 28 days, the segments were harvested and analyzed for mononuclear infiltration score and for the presence of respiratory epithelium. The fresh allograft group presented the highest mononuclear infiltration score (1.78 ± 0.43, P ≤ 0.001) compared to the autograft and glycerin-preserved allograft groups. In contrast to the regenerated epithelium observed in autograft segments, all fresh allografts and glycerin-preserved allografts showed desquamation of the respiratory mucosa. The low antigenicity observed in glycerin-preserved segments was probably the result of denudation of the respiratory epithelium and perhaps of a decrease in major histocompatibility complex class II antigens.

Relevance: 20.00%

Abstract:

Acute rejection of a transplanted organ is characterized by intense inflammation within the graft. Yet, for many years, transplant researchers have overlooked the role of classic mediators of inflammation such as prostaglandins and thromboxane (prostanoids) in alloimmune responses. It has been demonstrated that local production of prostanoids within the allograft is increased during an episode of acute rejection and that these molecules are able to interfere with graft function by modulating vascular tone, capillary permeability, and platelet aggregation. Experimental data also suggest that prostanoids may participate in alloimmune responses by directly modulating T lymphocyte and antigen-presenting cell function. In the present paper, we provide a brief overview of the alloimmune response and of prostanoid biology, and discuss the available evidence for the roles of prostaglandin E2 and thromboxane A2 in graft rejection.

Relevance: 20.00%

Abstract:

Prompt and accurate detection of rejection prior to pathological changes is vital for monitoring organ transplant recipients. Although biopsy remains the current gold standard for rejection diagnosis, it is an invasive method and cannot be repeated daily; thus, noninvasive monitoring methods are needed. In this study, by introducing an IL-2 neutralizing monoclonal antibody (IL-2 N-mAb) and immunosuppressants into cultures in the presence of specific stimulators and activated lymphocytes, an activated lymphocyte-specific assay (ALSA) system was established to detect specific activated lymphocytes. This assay demonstrated that suppression in the ALSA test was closely related to the existence of specific activated lymphocytes. The ALSA test was applied to 47 heart graft recipients, and the proliferation of activated lymphocytes from all recipients with rejection proven by endomyocardial biopsy was found to be inhibited by spleen cells from the corresponding donors, suggesting that this suppression could reflect the existence of activated lymphocytes against donor antigens, and thus rejection of the heart graft. The sensitivity of the ALSA test in these 47 heart graft recipients was 100%; however, the specificity was only 37.5%. It was also demonstrated that the IL-2 N-mAb was indispensable, and that proper culture time courses and stimulator concentrations were essential for the ALSA test. This preliminary study of 47 grafts indicates that the ALSA test is a promising noninvasive in vitro tool to assist with the diagnosis of rejection after heart transplantation.
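
For reference, the reported sensitivity and specificity follow directly from a confusion matrix. The short Python sketch below shows the arithmetic with hypothetical counts, since the abstract does not give the underlying breakdown of true and false positives.

    # Hypothetical confusion-matrix counts chosen only to reproduce the reported
    # rates (sensitivity 100%, specificity 37.5% among 47 recipients); the actual
    # breakdown is not given in the abstract.
    true_pos, false_neg = 15, 0    # biopsy-proven rejections detected / missed by ALSA
    true_neg, false_pos = 12, 20   # non-rejections correctly cleared / falsely flagged

    sensitivity = true_pos / (true_pos + false_neg)   # fraction of rejections detected
    specificity = true_neg / (true_neg + false_pos)   # fraction of non-rejections cleared
    print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")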

Relevance: 20.00%

Abstract:

This work presents a synopsis of strategies used in power management to achieve economical power and energy consumption in multicore systems, FPGAs and NoC platforms. A practical approach was taken to validate the significance of the Adaptive Power Management Algorithm (APMA) proposed for the system developed for this thesis project. The system comprises an arithmetic and logic unit, up and down counters, an adder, a state machine and a multiplexer. The aims of the project were, first, to develop the system used in the power management study; second, to perform area and power analysis of the system on several scalable technology platforms (UMC 90 nm at 1.2 V, UMC 90 nm at 1.32 V and UMC 0.18 µm at 1.80 V) in order to examine differences in the system's area and power consumption across platforms; and third, to explore strategies for reducing the system's power consumption and to propose an adaptive power management algorithm for that purpose. The strategies introduced in this work comprise Dynamic Voltage and Frequency Scaling (DVFS) and task parallelism. After development, the system was run on an FPGA board (essentially an NoC platform) and synthesized successfully on the three technology platforms; the simulation results show that the system meets all functional requirements, and the power consumption and area utilization were recorded and analyzed in Chapter 7 of this work. The work also extensively reviews power management strategies from quantitative studies by researchers and companies, combining literature analysis with experimental lab work and condensing the basic concepts of power management strategy from the technical literature.
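
As a rough illustration of the kind of policy a DVFS-based adaptive power manager implements, the Python sketch below scales a voltage/frequency operating point with the measured utilization. The operating points, thresholds and the V²·f power estimate are illustrative assumptions, not the APMA defined in the thesis.

    # Minimal sketch of an adaptive DVFS policy; operating points are assumed values.
    # Dynamic power scales roughly with C * V^2 * f, so dropping to a lower
    # voltage/frequency pair at low utilization saves power quadratically in V.
    OPERATING_POINTS = [      # (core voltage in V, clock frequency in MHz)
        (0.90, 200),
        (1.10, 400),
        (1.32, 800),
    ]

    def select_operating_point(utilization: float) -> tuple[float, float]:
        """Pick the lowest operating point that can still serve the observed load."""
        if utilization < 0.35:
            return OPERATING_POINTS[0]
        if utilization < 0.75:
            return OPERATING_POINTS[1]
        return OPERATING_POINTS[2]

    def relative_dynamic_power(v: float, f: float, v_max: float = 1.32, f_max: float = 800.0) -> float:
        """Dynamic power relative to the highest operating point (proportional to V^2 * f)."""
        return (v ** 2 * f) / (v_max ** 2 * f_max)

    for load in (0.2, 0.5, 0.9):
        v, f = select_operating_point(load)
        print(f"load {load:.0%}: {v} V / {f} MHz "
              f"(~{relative_dynamic_power(v, f):.0%} of peak dynamic power)")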

Relevance: 20.00%

Abstract:

This thesis introduces the Salmon Algorithm, a search meta-heuristic which can be used for a variety of combinatorial optimization problems. This algorithm is loosely based on the path finding behaviour of salmon swimming upstream to spawn. There are a number of tunable parameters in the algorithm, so experiments were conducted to find the optimum parameter settings for different search spaces. The algorithm was tested on one instance of the Traveling Salesman Problem and found to have superior performance to an Ant Colony Algorithm and a Genetic Algorithm. It was then tested on three coding theory problems - optimal edit codes, optimal Hamming distance codes, and optimal covering codes. The algorithm produced improvements on the best known values for five of six of the test cases using edit codes. It matched the best known results on four out of seven of the Hamming codes as well as three out of three of the covering codes. The results suggest the Salmon Algorithm is competitive with established guided random search techniques, and may be superior in some search spaces.
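
The Salmon Algorithm itself is not specified in this abstract, so the Python sketch below only shows the surrounding machinery such a metaheuristic is evaluated against on the Traveling Salesman Problem: a tour-length objective and a standard randomized 2-opt improvement move. It is context for the comparison, not the thesis's algorithm.

    import random

    def tour_length(tour, dist):
        """Total length of a closed TSP tour under a distance matrix."""
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    def two_opt_step(tour, dist):
        """One randomized 2-opt move: reverse a segment if it shortens the tour."""
        i, j = sorted(random.sample(range(len(tour)), 2))
        candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        return candidate if tour_length(candidate, dist) < tour_length(tour, dist) else tour

    # Tiny symmetric instance (4 cities) just to show the search-loop structure.
    dist = [[0, 2, 9, 10],
            [2, 0, 6, 4],
            [9, 6, 0, 3],
            [10, 4, 3, 0]]
    tour = list(range(4))
    random.shuffle(tour)
    for _ in range(100):
        tour = two_opt_step(tour, dist)
    print(tour, tour_length(tour, dist))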

Relevance: 20.00%

Abstract:

Understanding the machinery of gene regulation that controls gene expression has been one of the main focuses of bioinformaticians for years. We use a multi-objective genetic algorithm to evolve a specialized version of side effect machines for degenerate motif discovery. We compare several proposed objectives for the motifs they find, test different multi-objective scoring schemes and probabilistic background sequence models, and report our results on a synthetic dataset and on several biological benchmarking suites. We conclude with a comparison of our algorithm with some widely used motif discovery algorithms in the literature and suggest future directions for research in this area.
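
Degenerate motifs are conventionally written with IUPAC nucleotide codes, where a symbol such as W stands for A or T. The Python sketch below scans a sequence for matches of such a motif; it only illustrates the representation, not the side effect machines or the multi-objective GA described above.

    # IUPAC degenerate nucleotide codes: each symbol stands for a set of bases.
    IUPAC = {
        "A": "A", "C": "C", "G": "G", "T": "T",
        "R": "AG", "Y": "CT", "S": "CG", "W": "AT",
        "K": "GT", "M": "AC", "B": "CGT", "D": "AGT",
        "H": "ACT", "V": "ACG", "N": "ACGT",
    }

    def motif_matches(motif: str, sequence: str):
        """Yield the start positions where the degenerate motif matches the sequence."""
        m = len(motif)
        for i in range(len(sequence) - m + 1):
            window = sequence[i:i + m]
            if all(base in IUPAC[sym] for sym, base in zip(motif, window)):
                yield i

    # Example: "TATAWA" matches both TATAAA and TATATA (W = A or T).
    print(list(motif_matches("TATAWA", "GGTATAAACCTATATAGG")))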

Relevance: 20.00%

Abstract:

DNA assembly is among the most fundamental and difficult problems in bioinformatics. Near-optimal assembly solutions are available for bacterial and other small genomes; however, assembling large and complex genomes, especially the human genome, from Next-Generation Sequencing (NGS) data has proven very difficult because of the highly repetitive and complex nature of the human genome, short read lengths, uneven data coverage and tools that are not specifically built for human genomes. Moreover, many algorithms are not even scalable to human genome datasets containing hundreds of millions of short reads. The DNA assembly problem is usually divided into several subproblems, including error detection and correction, contig creation, scaffolding and contig orientation, each of which can be seen as a distinct research area. This thesis focuses specifically on creating contigs from the short reads and combining them with the outputs of other tools in order to obtain better results. Three assemblers, SOAPdenovo [Li09], Velvet [ZB08] and Meraculous [CHS+11], are selected for comparison. The results show that this thesis's work produces contigs comparable to those of the other assemblers, and that combining our contigs with the outputs of other tools yields the best results, outperforming all other investigated assemblers.
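
Velvet and SOAPdenovo are de Bruijn graph assemblers, so as background the Python sketch below builds a toy de Bruijn graph from error-free reads and walks unambiguous paths into contigs. The reads are made up and the path-walking is deliberately simplified; this is not the contig-creation method developed in the thesis.

    from collections import defaultdict

    def de_bruijn(reads, k):
        """Map each (k-1)-mer prefix to the set of (k-1)-mer suffixes of observed k-mers."""
        graph = defaultdict(set)
        for read in reads:
            for i in range(len(read) - k + 1):
                kmer = read[i:i + k]
                graph[kmer[:-1]].add(kmer[1:])
        return graph

    def simple_contigs(graph):
        """From each branching or start node, extend a contig while the path is unambiguous."""
        indegree = defaultdict(int)
        for node, successors in graph.items():
            for nxt in successors:
                indegree[nxt] += 1
        contigs = []
        for node in list(graph):
            if indegree[node] != 1 or len(graph[node]) != 1:   # not a 1-in-1-out node
                for nxt in graph[node]:
                    contig = node + nxt[-1]
                    while len(graph[nxt]) == 1 and indegree[nxt] == 1:
                        nxt = next(iter(graph[nxt]))
                        contig += nxt[-1]
                    contigs.append(contig)
        return contigs

    reads = ["ACGTACGA", "GTACGAGT", "CGAGTTTC"]   # toy error-free reads
    print(simple_contigs(de_bruijn(reads, k=4)))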

Relevance: 20.00%

Abstract:

Ordered gene problems are a very common class of optimization problems, and countless algorithms have been developed in an attempt to find high-quality solutions to them. Many other types of problems are also commonly reduced to ordered gene problems because of the many popular heuristics and metaheuristics available for them. Several ordered gene problems are studied, namely the travelling salesman problem, the bin packing problem and the graph colouring problem. In addition, two bioinformatics problems not traditionally seen as ordered gene problems are studied: DNA error correction and DNA fragment assembly. These problems are examined with multiple variations and combinations of heuristics and metaheuristics and with two distinct types of representations. The majority of the algorithms are built around the Recentering-Restarting Genetic Algorithm. The algorithm variations were successful on all problems studied, particularly the two bioinformatics problems. For DNA error correction, multiple cases were found in which 100% of the codes were corrected. The algorithm variations were also able to beat all other state-of-the-art DNA fragment assemblers on 13 out of 16 benchmark problem instances.
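
For context on the permutation ("ordered gene") representation mentioned above, the Python sketch below applies a simple order-preserving crossover: it copies a slice from one parent and fills the remaining positions in the other parent's gene order. This is a generic operator for illustration, not the Recentering-Restarting Genetic Algorithm.

    import random

    def order_based_crossover(parent1, parent2):
        """Copy a random slice from parent1, then fill remaining slots in parent2's order."""
        n = len(parent1)
        i, j = sorted(random.sample(range(n), 2))
        child = [None] * n
        child[i:j + 1] = parent1[i:j + 1]
        kept = set(child[i:j + 1])
        fill = (gene for gene in parent2 if gene not in kept)
        for pos in range(n):
            if child[pos] is None:
                child[pos] = next(fill)
        return child

    # Example: two tours over six cities encoded as permutations.
    random.seed(7)
    print(order_based_crossover([0, 1, 2, 3, 4, 5], [5, 3, 1, 0, 4, 2]))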

Relevance: 20.00%

Abstract:

Understanding the relationship between genetic diseases and the genes associated with them is an important problem regarding human health. The vast amount of data created from a large number of high-throughput experiments performed in the last few years has resulted in an unprecedented growth in computational methods to tackle the disease gene association problem. Nowadays, it is clear that a genetic disease is not a consequence of a defect in a single gene. Instead, the disease phenotype is a reflection of various genetic components interacting in a complex network. In fact, genetic diseases, like any other phenotype, occur as a result of various genes working in sync with each other in a single or several biological module(s). Using a genetic algorithm, our method tries to evolve communities containing the set of potential disease genes likely to be involved in a given genetic disease. Having a set of known disease genes, we first obtain a protein-protein interaction (PPI) network containing all the known disease genes. All the other genes inside the procured PPI network are then considered as candidate disease genes as they lie in the vicinity of the known disease genes in the network. Our method attempts to find communities of potential disease genes strongly working with one another and with the set of known disease genes. As a proof of concept, we tested our approach on 16 breast cancer genes and 15 Parkinson's Disease genes. We obtained comparable or better results than CIPHER, ENDEAVOUR and GPEC, three of the most reliable and frequently used disease-gene ranking frameworks.
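
As a concrete illustration of the candidate-selection step described above, the Python sketch below takes the direct PPI neighbours of a set of known disease genes using networkx. The edge list and gene names are toy values for the example, not data from the study.

    import networkx as nx

    # Toy PPI network; edges and gene names are illustrative only.
    ppi = nx.Graph([
        ("BRCA1", "BARD1"), ("BRCA1", "TP53"), ("TP53", "MDM2"),
        ("BARD1", "XRCC2"), ("MDM2", "CDK2"), ("CDK2", "CCNE1"),
    ])
    known_disease_genes = {"BRCA1", "TP53"}

    # Candidate disease genes: direct PPI neighbours of the known disease genes.
    candidates = {
        neighbour
        for gene in known_disease_genes
        for neighbour in ppi.neighbors(gene)
    } - known_disease_genes

    print(sorted(candidates))   # ['BARD1', 'MDM2'] for this toy network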

Relevance: 20.00%

Abstract:

In this thesis we analyze dictionary graphs and some other kinds of graphs using the PageRank algorithm. We calculated the correlation between the degree and the PageRank of all nodes for a graph obtained from the Merriam-Webster dictionary, a French dictionary, and the WordNet hypernym and synonym dictionaries. Our conclusion was that PageRank can be a good tool for comparing the quality of dictionaries. We also studied some artificial social graphs and random graphs. We found that when we omitted some random nodes from each of the graphs, there were no significant changes in the ranking of the nodes according to their PageRank. We also found that some of the social graphs selected for our study were less resistant to such changes in their PageRank rankings.
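
The degree-versus-PageRank comparison described here is straightforward to reproduce with networkx and scipy; the Python sketch below does so on a small random stand-in graph, since the dictionary graphs themselves are not part of the abstract.

    import networkx as nx
    from scipy.stats import pearsonr

    # Stand-in graph; the thesis uses dictionary graphs (Merriam-Webster, WordNet, ...).
    graph = nx.erdos_renyi_graph(n=500, p=0.02, seed=42)

    degrees = [graph.degree(node) for node in graph.nodes()]
    pagerank = nx.pagerank(graph, alpha=0.85)
    ranks = [pagerank[node] for node in graph.nodes()]

    correlation, p_value = pearsonr(degrees, ranks)
    print(f"Pearson correlation between degree and PageRank: {correlation:.3f}")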

Relevance: 20.00%

Abstract:

This thesis addresses vehicle routing problems with time windows in which a profit is associated with each customer and the objective is to maximize the sum of collected profits minus the transportation costs. In addition, the same vehicle may perform several routes during the planning horizon. This problem has received relatively little attention despite its practical importance. For example, in the delivery of perishable goods, several short routes must be combined to form complete workdays. We believe this type of problem will become increasingly important in the future with the rise of electronic commerce, such as online groceries, where customers can order products over the internet for home delivery. In the first chapter of this thesis, we present a review of the literature on vehicle routing problems with profits and on problems allowing vehicle reuse. We describe the general solution methodologies adopted for them, namely exact methods, heuristics and metaheuristics. Finally, we discuss dynamic routing problems, in which some of the problem data are not known in advance. In the second chapter, we describe an exact algorithm for a routing problem with time windows and vehicle reuse in which the primary objective is to maximize the number of customers served. To this end, the problem is modeled as a routing problem with profits. The exact algorithm is based on column generation coupled with an elementary shortest path algorithm with resource constraints. To solve instances of realistic size within reasonable computing times, a heuristic approach is required. The third chapter therefore proposes an adaptive large neighborhood search method that exploits the different hierarchical levels of the problem (the vehicles' complete workdays, the routes that make up those workdays, and the customers that make up the routes). The fourth chapter addresses the dynamic case and proposes a strategy for accepting or rejecting new service requests, based on an anticipation of future requests. The approach relies on generating scenarios for different possible realizations of future requests. The opportunity cost of serving a new request is based on an evaluation of the scenarios with and without that request. Finally, the last chapter summarizes the contributions of this thesis and proposes some avenues for future research.
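
In generic routing-with-profits notation, the objective described above (collected profits minus transportation costs) can be sketched as the LaTeX below; the symbols are standard placeholders rather than the thesis's exact formulation.

    % Sketch of the profit-collecting objective (generic notation):
    %   p_i     profit of customer i          c_{ij}  travel cost of arc (i,j)
    %   y_i   = 1 if customer i is visited within its time window [a_i, b_i]
    %   x_{ij} = 1 if some route traverses arc (i,j)
    % Routes assigned to the same vehicle must fit sequentially in the planning horizon.
    \max \; \sum_{i \in N} p_i \, y_i \; - \; \sum_{(i,j) \in A} c_{ij} \, x_{ij}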

Relevance: 20.00%

Abstract:

We consider envy-free (and budget-balanced) rules that are least manipulable, whether manipulability is measured by the number of manipulating agents or by utility gains. Recently it has been shown that, for any profile of quasi-linear preferences, the outcome of any such least manipulable envy-free rule can be obtained via agent-k-linked allocations. This note provides an algorithm for identifying agent-k-linked allocations.

Relevance: 20.00%

Abstract:

This thesis examines identity formation as a complicated process in which several elements intervene. A person's identity is composed of both an individual identity and a collective one. When the individual identity is harshly judged by others as deviant, the person is driven either to maintain an image compatible with social prototypes or to resist and affirm their personal identity. My work shows that the exclusion and repression of certain aspects of identity can cause psychological dysfunction that is difficult to overcome. By contrast, self-acceptance and the embrace of all the elements that constitute the self lead, admittedly after a long struggle, to the salvation of soul and body. The first chapter proposes a psychosocial approach that seeks to explain how groups function and how interaction with others plays a decisive role in identity formation. External elements such as social ideals influence people's behaviour and choices. However, this influence can become a threat to personal specificities and distinctive traits. The second chapter examines the problems one risks when identity traits transgress social norms. We start from the thorny problem of the quest for the self in James Baldwin's Giovanni's Room. David's homosexuality was so thoroughly rejected by society that it engendered feelings of shame and guilt in him. He had to choose between sacrificing aspects of himself to satisfy social paradigms or losing what was his own. David does not manage to free himself; he remains a prisoner of rigid perceptions of masculinity and sexuality. My analysis focuses mainly on examining the various theoretical elements that touch on questions of sex and sexuality. The conclusion is as follows: the more rigid and fixed the dominant opinions, the more they become a prison for the individual; conversely, the more tolerant and flexible they are, the more they accept the diversity of human identity. In the last chapter, I examine the representation of relationships between male characters in Just Above My Head. Homosexuality is presented as a sacred means of expressing love. The characters reveal their feelings implicitly through spiritual songs such as gospel, or explicitly through physical connection. In this novel, Baldwin shows that only through sincerity and love can the individual attain liberation of the self.

Relevance: 20.00%

Abstract:

Live-cell fluorescence microscopy produces large amounts of data. These data feature a great diversity in the shapes of the objects of interest and have a very low signal-to-noise ratio. To design an effective pipeline of algorithms for processing fluorescence microscopy images, it is important to have a robust and reliable segmentation, since segmentation is the initial step of image processing. In this thesis, I present MinSeg, a segmentation algorithm for fluorescence microscopy images that makes few assumptions about the image and uses statistical properties to distinguish signal from noise. MinSeg makes no assumptions about the size or shape of the objects in the image and is therefore applicable to a wide variety of images. I also present a suite of algorithms, built on the MinSeg segmentation algorithm, for quantifying small complexes in single-molecule fluorescence microscopy experiments. This suite of algorithms was used to quantify a protein named CENP-A, a variant of histone H3. With this technique, we found that CENP-A is mainly present as a dimer.
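
MinSeg is only characterized above as making few assumptions and relying on statistical properties to separate signal from noise; in that spirit, the Python sketch below shows a generic robust-statistics thresholding of a noisy fluorescence frame. The threshold rule and the synthetic frame are assumptions for illustration, not the MinSeg algorithm.

    import numpy as np

    def statistical_threshold(image: np.ndarray, k: float = 3.0) -> np.ndarray:
        """Flag pixels brighter than the estimated background by k sigma.

        Background statistics are estimated robustly with the median and the
        median absolute deviation, so bright spots do not bias the estimate.
        This is a generic illustration, not the MinSeg algorithm itself.
        """
        background = np.median(image)
        mad = np.median(np.abs(image - background))
        sigma = 1.4826 * mad          # MAD -> standard deviation for Gaussian noise
        return image > background + k * sigma

    # Synthetic low-SNR frame: Gaussian noise plus a few bright fluorescent spots.
    rng = np.random.default_rng(0)
    frame = rng.normal(100.0, 5.0, size=(64, 64))
    frame[10:13, 10:13] += 40.0
    frame[40:42, 50:52] += 40.0

    mask = statistical_threshold(frame)
    print("segmented pixels:", int(mask.sum()))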